diff --git a/docs/api/api_python/amp/mindspore.amp.DynamicLossScaler.rst b/docs/api/api_python/amp/mindspore.amp.DynamicLossScaler.rst index 1adca585edf..87d795fe223 100644 --- a/docs/api/api_python/amp/mindspore.amp.DynamicLossScaler.rst +++ b/docs/api/api_python/amp/mindspore.amp.DynamicLossScaler.rst @@ -24,7 +24,7 @@ mindspore.amp.DynamicLossScaler 教程样例: - `自动混合精度 - 损失缩放 - `_ + `_ .. py:method:: scale(inputs) @@ -38,7 +38,7 @@ mindspore.amp.DynamicLossScaler 教程样例: - `自动混合精度 - 损失缩放 - `_ + `_ .. py:method:: unscale(inputs) @@ -52,4 +52,4 @@ mindspore.amp.DynamicLossScaler 教程样例: - `自动混合精度 - 损失缩放 - `_ \ No newline at end of file + `_ \ No newline at end of file diff --git a/docs/api/api_python/amp/mindspore.amp.LossScaler.rst b/docs/api/api_python/amp/mindspore.amp.LossScaler.rst index 243d0b4a58d..f25dabad9b8 100644 --- a/docs/api/api_python/amp/mindspore.amp.LossScaler.rst +++ b/docs/api/api_python/amp/mindspore.amp.LossScaler.rst @@ -7,7 +7,7 @@ mindspore.amp.LossScaler 派生类需要实现该类的所有方法。训练过程中,`scale` 和 `unscale` 用于对损失值或梯度进行放大或缩小,以避免数据溢出;`adjust` 用于调整损失缩放系数 `scale_value` 的值。 - 关于使用 `LossScaler` 进行损失缩放,请查看 `教程 `_。 + 关于使用 `LossScaler` 进行损失缩放,请查看 `教程 `_。 .. 
warning:: 这是一个实验性API,后续可能修改或删除。 diff --git a/docs/api/api_python/amp/mindspore.amp.all_finite.rst b/docs/api/api_python/amp/mindspore.amp.all_finite.rst index c6e821ec5e2..99bb1815a49 100644 --- a/docs/api/api_python/amp/mindspore.amp.all_finite.rst +++ b/docs/api/api_python/amp/mindspore.amp.all_finite.rst @@ -18,4 +18,4 @@ mindspore.amp.all_finite 教程样例: - `自动混合精度 - 损失缩放 - `_ + `_ diff --git a/docs/api/api_python/amp/mindspore.amp.auto_mixed_precision.rst b/docs/api/api_python/amp/mindspore.amp.auto_mixed_precision.rst index e9a28c1a84f..61a875bfb6e 100644 --- a/docs/api/api_python/amp/mindspore.amp.auto_mixed_precision.rst +++ b/docs/api/api_python/amp/mindspore.amp.auto_mixed_precision.rst @@ -25,7 +25,7 @@ mindspore.amp.auto_mixed_precision [:class:`mindspore.nn.BatchNorm1d`, :class:`mindspore.nn.BatchNorm2d`, :class:`mindspore.nn.BatchNorm3d`, :class:`mindspore.nn.LayerNorm`] - 关于自动混合精度的详细介绍,请参考 `自动混合精度 `_ 。 + 关于自动混合精度的详细介绍,请参考 `自动混合精度 `_ 。 .. note:: - 重复调用混合精度接口,如 `custom_mixed_precision` 和 `auto_mixed_precision` ,可能导致网络层数增大,性能降低。 diff --git a/docs/api/api_python/dataset/dataset_method/operation/mindspore.dataset.Dataset.map.rst b/docs/api/api_python/dataset/dataset_method/operation/mindspore.dataset.Dataset.map.rst index c8751a9e189..69e06243649 100644 --- a/docs/api/api_python/dataset/dataset_method/operation/mindspore.dataset.Dataset.map.rst +++ b/docs/api/api_python/dataset/dataset_method/operation/mindspore.dataset.Dataset.map.rst @@ -12,9 +12,9 @@ mindspore.dataset.Dataset.map 最后一个数据增强的输出列的列名由 `output_columns` 指定,如果没有指定 `output_columns` ,输出列名与 `input_columns` 一致。 - 如果使用的是 `mindspore` `dataset` 提供的数据增强( - `vision类 `_ , - `nlp类 `_ , - `audio类 `_ ),请使用如下参数: + `vision类 `_ , + `nlp类 `_ , + `audio类 `_ ),请使用如下参数: .. 
image:: map_parameter_cn.png @@ -31,9 +31,9 @@ mindspore.dataset.Dataset.map - python_multiprocessing (bool, 可选) - 启用Python多进程模式加速map操作。当传入的 `operations` 计算量很大时,开启此选项可能会有较好效果。默认值: ``False`` 。 - max_rowsize (Union[int, list[int]], 可选) - 指定在多进程之间复制数据时,共享内存分配的基本单位,总占用的共享内存会随着 ``num_parallel_workers`` 和 :func:`mindspore.dataset.config.set_prefetch_size` 增加而变大,仅当 `python_multiprocessing` 为 ``True`` 时,该选项有效。如果是int值,代表 ``input_columns`` 和 ``output_columns`` 均使用该值为单位创建共享内存;如果是列表,第一个元素代表 ``input_columns`` 使用该值为单位创建共享内存,第二个元素代表 ``output_columns`` 使用该值为单位创建共享内存。默认值: ``16`` ,单位为MB。 - - cache (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - cache (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 - callbacks (DSCallback, list[DSCallback], 可选) - 要调用的Dataset回调函数列表。默认值: ``None`` 。 - - offload (bool, 可选) - 是否进行异构硬件加速,详情请阅读 `数据准备异构加速 `_ 。默认值: ``None`` 。 + - offload (bool, 可选) - 是否进行异构硬件加速,详情请阅读 `数据准备异构加速 `_ 。默认值: ``None`` 。 .. 
note:: - `operations` 参数接收 `TensorOperation` 类型的数据处理操作,以及用户定义的Python函数(PyFuncs)。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.AGNewsDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.AGNewsDataset.rst index 6dee8ec2169..fea14419aeb 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.AGNewsDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.AGNewsDataset.rst @@ -21,7 +21,7 @@ mindspore.dataset.AGNewsDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -31,7 +31,7 @@ mindspore.dataset.AGNewsDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于AGNews数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.AmazonReviewDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.AmazonReviewDataset.rst index 62cc75c3334..6ae530b08ce 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.AmazonReviewDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.AmazonReviewDataset.rst @@ -23,7 +23,7 @@ mindspore.dataset.AmazonReviewDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -33,7 +33,7 @@ mindspore.dataset.AmazonReviewDataset 教程样例: - 
`使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于AmazonReview数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.CLUEDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.CLUEDataset.rst index 89bde257c50..1ce47b599f3 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.CLUEDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.CLUEDataset.rst @@ -22,7 +22,7 @@ mindspore.dataset.CLUEDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 根据给定的 `task` 参数 和 `usage` 配置,数据集会生成不同的输出列: @@ -177,7 +177,7 @@ mindspore.dataset.CLUEDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于CLUE数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.CMUArcticDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.CMUArcticDataset.rst index f223a6c51dd..7eaafc957d8 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.CMUArcticDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.CMUArcticDataset.rst @@ -21,7 +21,7 @@ - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` ,下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` ,不进行分片。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` ,将使用 ``0`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -34,7 +34,7 @@ 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: - 暂不支持指定 `sampler` 参数为 :class:`mindspore.dataset.PKSampler`。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.CSVDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.CSVDataset.rst index 062d981c7cd..d03e216c8cf 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.CSVDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.CSVDataset.rst @@ -23,7 +23,7 @@ - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_files` 参数所指向的文件无效或不存在。 @@ -35,6 +35,6 @@ 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. include:: mindspore.dataset.api_list_nlp.rst diff --git a/docs/api/api_python/dataset/mindspore.dataset.Caltech101Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.Caltech101Dataset.rst index 34e57df076c..e3df1cd86d2 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.Caltech101Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.Caltech101Dataset.rst @@ -39,7 +39,7 @@ mindspore.dataset.Caltech101Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.Caltech256Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.Caltech256Dataset.rst index 9f040212cf5..75bcd11eabc 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.Caltech256Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.Caltech256Dataset.rst @@ -16,7 +16,7 @@ mindspore.dataset.Caltech256Dataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -30,7 +30,7 @@ mindspore.dataset.Caltech256Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.CelebADataset.rst b/docs/api/api_python/dataset/mindspore.dataset.CelebADataset.rst index 8630b44cce7..491ce8599f8 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.CelebADataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.CelebADataset.rst @@ -19,7 +19,7 @@ mindspore.dataset.CelebADataset - **num_samples** (int, 可选) - 指定从数据集中读取的样本数,可以小于数据集总数。默认值: ``None`` ,读取全部样本图片。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 - **decrypt** (callable, 可选) - 图像解密函数,接受加密的图片路径并返回bytes类型的解密数据。默认值: ``None`` ,不进行解密。 异常: @@ -34,7 +34,7 @@ mindspore.dataset.CelebADataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.Cifar100Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.Cifar100Dataset.rst index 1f790f44605..722a9659ef0 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.Cifar100Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.Cifar100Dataset.rst @@ -17,11 +17,11 @@ mindspore.dataset.Cifar100Dataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.Cifar10Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.Cifar10Dataset.rst index 1487095823c..6b55079bcd9 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.Cifar10Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.Cifar10Dataset.rst @@ -18,11 +18,11 @@ mindspore.dataset.Cifar10Dataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ 
异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.CityscapesDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.CityscapesDataset.rst index 61d1d261cbd..647d69a6f10 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.CityscapesDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.CityscapesDataset.rst @@ -21,11 +21,11 @@ mindspore.dataset.CityscapesDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.CoNLL2000Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.CoNLL2000Dataset.rst index d0816ac2a65..16663ca4a90 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.CoNLL2000Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.CoNLL2000Dataset.rst @@ -22,7 +22,7 @@ mindspore.dataset.CoNLL2000Dataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。指定此参数后, `num_samples` 表示每个分片的最大样本数。默认值: ``None`` 。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。只有当指定了 `num_shards` 时才能指定此参数。默认值: ``None`` 。 - **num_parallel_workers** (int, 可选) - 指定读取数据的工作线程数。默认值: ``None`` ,使用全局默认线程数(8),也可以通过 :func:`mindspore.dataset.config.set_num_parallel_workers` 配置全局线程数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: 
- **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -32,7 +32,7 @@ mindspore.dataset.CoNLL2000Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于CoNLL2000数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.CocoDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.CocoDataset.rst index ee20bf15bff..070354786a5 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.CocoDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.CocoDataset.rst @@ -18,7 +18,7 @@ - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` ,表2中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 - **extra_metadata** (bool, 可选) - 用于指定是否额外输出一个数据列用于表示图片元信息。如果为True,则将额外输出一个名为 `[_meta-filename, dtype=string]` 的数据列。默认值: ``False`` 。 - **decrypt** (callable, 可选) - 图像解密函数,接受加密的图片路径并返回bytes类型的解密数据。默认值: ``None`` ,不进行解密。 @@ -77,7 +77,7 @@ 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: - 当参数 `extra_metadata` 为 ``True`` 时,还需使用 `rename` 操作删除额外数据列 '_meta-filename'的前缀 '_meta-', diff --git a/docs/api/api_python/dataset/mindspore.dataset.DBpediaDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.DBpediaDataset.rst index 7eeb2c11b83..3e65b321ad4 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.DBpediaDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.DBpediaDataset.rst @@ -22,7 +22,7 @@ mindspore.dataset.DBpediaDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -33,7 +33,7 @@ mindspore.dataset.DBpediaDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于DBpedia数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.DIV2KDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.DIV2KDataset.rst index 0b6b7d382cd..390897bd71e 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.DIV2KDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.DIV2KDataset.rst @@ -20,7 +20,7 @@ mindspore.dataset.DIV2KDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -39,7 +39,7 
@@ mindspore.dataset.DIV2KDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.DatasetCache.rst b/docs/api/api_python/dataset/mindspore.dataset.DatasetCache.rst index 8c0821bd104..bb0820e2e3b 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.DatasetCache.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.DatasetCache.rst @@ -5,7 +5,7 @@ mindspore.dataset.DatasetCache 创建数据缓存客户端实例。 - 关于单节点数据缓存的使用,请参阅 `单节点数据缓存教程 `_ 。 + 关于单节点数据缓存的使用,请参阅 `单节点数据缓存教程 `_ 。 参数: - **session_id** (int) - 当前数据缓存客户端的会话ID,用户在命令行开启缓存服务端后可通过 `cache_admin -g` 获取。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.EMnistDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.EMnistDataset.rst index daa467d8e5e..6e773bc37b4 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.EMnistDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.EMnistDataset.rst @@ -18,7 +18,7 @@ mindspore.dataset.EMnistDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - 同时指定了 `sampler` 和 `shuffle` 参数。 @@ -29,7 +29,7 @@ mindspore.dataset.EMnistDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.EnWik9Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.EnWik9Dataset.rst index 8174f88c98a..f681929fc37 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.EnWik9Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.EnWik9Dataset.rst @@ -20,7 +20,7 @@ mindspore.dataset.EnWik9Dataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -30,7 +30,7 @@ mindspore.dataset.EnWik9Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于EnWik9数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.FakeImageDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.FakeImageDataset.rst index a538078e51f..ece650b8a98 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.FakeImageDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.FakeImageDataset.rst @@ -18,7 +18,7 @@ mindspore.dataset.FakeImageDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - 同时指定了 `sampler` 和 
`shuffle` 参数。 @@ -30,7 +30,7 @@ mindspore.dataset.FakeImageDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.FashionMnistDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.FashionMnistDataset.rst index 2352d3c5e0c..9153228542b 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.FashionMnistDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.FashionMnistDataset.rst @@ -17,7 +17,7 @@ mindspore.dataset.FashionMnistDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -30,7 +30,7 @@ mindspore.dataset.FashionMnistDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.FlickrDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.FlickrDataset.rst index 6d0a4b0182a..ab7a9f3d197 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.FlickrDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.FlickrDataset.rst @@ -17,7 +17,7 @@ - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` ,表2中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -32,7 +32,7 @@ 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.Flowers102Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.Flowers102Dataset.rst index cbc0c36ec06..461b18bcb16 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.Flowers102Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.Flowers102Dataset.rst @@ -33,7 +33,7 @@ mindspore.dataset.Flowers102Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.Food101Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.Food101Dataset.rst index df441111426..7b151d56ce5 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.Food101Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.Food101Dataset.rst @@ -18,7 +18,7 @@ mindspore.dataset.Food101Dataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` ,下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -33,7 +33,7 @@ mindspore.dataset.Food101Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.GTZANDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.GTZANDataset.rst index dbc0e241d9a..219b5304cdb 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.GTZANDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.GTZANDataset.rst @@ -19,7 +19,7 @@ - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` ,下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -32,7 +32,7 @@ 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: - 暂不支持指定 `sampler` 参数为 :class:`mindspore.dataset.PKSampler`。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.GeneratorDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.GeneratorDataset.rst index 5c93a085e8f..5d0a478bda6 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.GeneratorDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.GeneratorDataset.rst @@ -37,7 +37,7 @@ 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: - 如果配置 `python_multiprocessing=True` (默认值: ``True`` ) 和 `num_parallel_workers>1` (默认值:1) 表示启动了多进程方式进行数据load加速, diff --git a/docs/api/api_python/dataset/mindspore.dataset.IMDBDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.IMDBDataset.rst index 6f95995d13d..2ceaaad0b19 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.IMDBDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.IMDBDataset.rst @@ -16,7 +16,7 @@ mindspore.dataset.IMDBDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -29,7 +29,7 @@ mindspore.dataset.IMDBDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.IWSLT2016Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.IWSLT2016Dataset.rst index 9febd470041..6207084bd5e 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.IWSLT2016Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.IWSLT2016Dataset.rst @@ -24,7 +24,7 @@ mindspore.dataset.IWSLT2016Dataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - **num_parallel_workers** (int, 可选) - 指定读取数据的工作线程数。默认值: ``None`` ,使用全局默认线程数(8),也可以通过 :func:`mindspore.dataset.config.set_num_parallel_workers` 配置全局线程数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -34,7 +34,7 @@ mindspore.dataset.IWSLT2016Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于IWSLT2016数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.IWSLT2017Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.IWSLT2017Dataset.rst index a631889546e..b6c8fe02bdc 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.IWSLT2017Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.IWSLT2017Dataset.rst @@ -25,7 +25,7 @@ mindspore.dataset.IWSLT2017Dataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - **num_parallel_workers** (int, 可选) - 指定读取数据的工作线程数。默认值: ``None`` ,使用全局默认线程数(8),也可以通过 :func:`mindspore.dataset.config.set_num_parallel_workers` 配置全局线程数。 - - **cache** 
(:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -35,7 +35,7 @@ mindspore.dataset.IWSLT2017Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于IWSLT2017数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.ImageFolderDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.ImageFolderDataset.rst index 8717018025f..f02f977977b 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.ImageFolderDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.ImageFolderDataset.rst @@ -18,7 +18,7 @@ mindspore.dataset.ImageFolderDataset - **decode** (bool, 可选) - 是否对读取的图片进行解码操作。默认值: ``False`` ,不解码。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 - **decrypt** (callable, 可选) - 图像解密函数,接受加密的图片路径并返回bytes类型的解密数据。默认值: ``None`` ,不进行解密。 异常: @@ -33,7 +33,7 @@ mindspore.dataset.ImageFolderDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: - 如果 `decode` 参数的值为 ``False`` ,则得到的 `image` 列的shape为[undecoded_image_size],如果为True则 `image` 列的shape为[H,W,C]。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.KITTIDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.KITTIDataset.rst index eb09ae3dd7a..202566fd842 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.KITTIDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.KITTIDataset.rst @@ -26,7 +26,7 @@ mindspore.dataset.KITTIDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - 同时指定了 `sampler` 和 `shuffle` 参数。 @@ -38,7 +38,7 @@ mindspore.dataset.KITTIDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.KMnistDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.KMnistDataset.rst index c067d268a1f..7374d11524f 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.KMnistDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.KMnistDataset.rst @@ -17,7 +17,7 @@ mindspore.dataset.KMnistDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -30,7 +30,7 @@ mindspore.dataset.KMnistDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.LFWDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.LFWDataset.rst index 7f485e272a9..98923907159 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.LFWDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.LFWDataset.rst @@ -24,7 +24,7 @@ mindspore.dataset.LFWDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -36,7 +36,7 @@ mindspore.dataset.LFWDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.LJSpeechDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.LJSpeechDataset.rst index 337a2eefd44..92176a5a5ea 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.LJSpeechDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.LJSpeechDataset.rst @@ -16,7 +16,7 @@ mindspore.dataset.LJSpeechDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -29,7 +29,7 @@ mindspore.dataset.LJSpeechDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.LSUNDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.LSUNDataset.rst index 97483aa27a2..49728e09fb0 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.LSUNDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.LSUNDataset.rst @@ -20,7 +20,7 @@ mindspore.dataset.LSUNDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -33,7 +33,7 @@ mindspore.dataset.LSUNDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.LibriTTSDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.LibriTTSDataset.rst index fd78691b1e8..36fc84e15cf 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.LibriTTSDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.LibriTTSDataset.rst @@ -23,7 +23,7 @@ mindspore.dataset.LibriTTSDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` ,下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -36,7 +36,7 @@ mindspore.dataset.LibriTTSDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: - 暂不支持指定 `sampler` 参数为 :class:`mindspore.dataset.PKSampler`。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.ManifestDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.ManifestDataset.rst index 7fc6971cb3a..c60ee4faca7 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.ManifestDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.ManifestDataset.rst @@ -18,7 +18,7 @@ - **decode** (bool, 可选) - 是否对读取的图片进行解码操作。默认值: ``False`` ,不解码。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_files` 路径下不包含任何数据文件。 @@ -32,7 +32,7 @@ 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: - 如果 `decode` 为 ``False`` ,`image` 列返回图像的一维原始字节。否则,将返回 shape 为 :math:`[H,W,C]` 的解码图像。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.MindDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.MindDataset.rst index 2261f5d031d..0bec30c7de9 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.MindDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.MindDataset.rst @@ -23,7 +23,7 @@ - **padded_sample** (dict, 可选) - 指定额外添加到数据集的样本,可用于在分布式训练时补齐分片数据,注意字典的键名需要与 `columns_list` 指定的列名相同。默认值: ``None`` ,不添加样本。需要与 `num_padded` 参数同时使用。 - **num_padded** (int, 可选) - 指定额外添加的数据集样本的数量。在分布式训练时可用于为数据集补齐样本,使得总样本数量可被 `num_shards` 整除。默认值: ``None`` ,不添加样本。需要与 `padded_sample` 参数同时使用。 - **num_samples** (int, 可选) - 指定从数据集中读取的样本数。默认值: ``None`` ,读取所有样本。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **ValueError** - `dataset_files` 参数所指向的文件无效或不存在。 @@ -34,7 +34,7 @@ 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.MnistDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.MnistDataset.rst index bf3988e00af..6fd4cde961e 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.MnistDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.MnistDataset.rst @@ -17,7 +17,7 @@ mindspore.dataset.MnistDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -31,7 +31,7 @@ mindspore.dataset.MnistDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.Multi30kDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.Multi30kDataset.rst index 3574ccea4e5..a4134f4a6f3 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.Multi30kDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.Multi30kDataset.rst @@ -25,7 +25,7 @@ mindspore.dataset.Multi30kDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -39,7 +39,7 @@ mindspore.dataset.Multi30kDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于Multi30k数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.NumpySlicesDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.NumpySlicesDataset.rst index 579c76ed554..a6748d95d3c 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.NumpySlicesDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.NumpySlicesDataset.rst @@ -34,6 +34,6 @@ mindspore.dataset.NumpySlicesDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. include:: mindspore.dataset.api_list_nlp.rst diff --git a/docs/api/api_python/dataset/mindspore.dataset.OBSMindDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.OBSMindDataset.rst index ec145645b09..081484aa836 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.OBSMindDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.OBSMindDataset.rst @@ -37,7 +37,7 @@ 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: - 需要用户提前在云存储上创建同步用的目录,然后通过 `sync_obs_path` 指定。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.OmniglotDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.OmniglotDataset.rst index eecfcd12b03..8083f2e34bd 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.OmniglotDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.OmniglotDataset.rst @@ -19,7 +19,7 @@ - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -31,7 +31,7 @@ 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.PaddedDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.PaddedDataset.rst index 7f4470cf579..db6a8565cbf 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.PaddedDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.PaddedDataset.rst @@ -15,6 +15,6 @@ mindspore.dataset.PaddedDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
include:: mindspore.dataset.api_list_nlp.rst diff --git a/docs/api/api_python/dataset/mindspore.dataset.PennTreebankDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.PennTreebankDataset.rst index ca7241f2a47..84253845aa0 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.PennTreebankDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.PennTreebankDataset.rst @@ -22,7 +22,7 @@ mindspore.dataset.PennTreebankDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -32,7 +32,7 @@ mindspore.dataset.PennTreebankDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于PennTreebank数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.PhotoTourDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.PhotoTourDataset.rst index 2e211c84e96..c7e371479e3 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.PhotoTourDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.PhotoTourDataset.rst @@ -22,7 +22,7 @@ mindspore.dataset.PhotoTourDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -38,7 +38,7 
@@ mindspore.dataset.PhotoTourDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.Places365Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.Places365Dataset.rst index 6bf90490e93..e3633856e61 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.Places365Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.Places365Dataset.rst @@ -19,7 +19,7 @@ mindspore.dataset.Places365Dataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -33,7 +33,7 @@ mindspore.dataset.Places365Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.QMnistDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.QMnistDataset.rst index da022007021..5d818c2da0d 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.QMnistDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.QMnistDataset.rst @@ -17,7 +17,7 @@ mindspore.dataset.QMnistDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -30,7 +30,7 @@ mindspore.dataset.QMnistDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.RandomDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.RandomDataset.rst index 166b935ea93..232c571ab88 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.RandomDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.RandomDataset.rst @@ -12,7 +12,7 @@ mindspore.dataset.RandomDataset - **columns_list** (list[str], 可选) - 指定生成数据集的列名。默认值: ``None`` ,生成的数据列将以"c0"、"c1"、"c2" ... 
"cn"的规则命名。 - **num_samples** (int, 可选) - 指定从数据集中读取的样本数。默认值: ``None`` ,读取所有样本。 - **num_parallel_workers** (int, 可选) - 指定读取数据的工作线程数。默认值: ``None`` ,使用全局默认线程数(8),也可以通过 :func:`mindspore.dataset.config.set_num_parallel_workers` 配置全局线程数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 - **shuffle** (bool, 可选) - 是否混洗数据集。默认值: ``None`` 。下表中会展示不同参数配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 @@ -30,6 +30,6 @@ mindspore.dataset.RandomDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. include:: mindspore.dataset.api_list_nlp.rst diff --git a/docs/api/api_python/dataset/mindspore.dataset.RenderedSST2Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.RenderedSST2Dataset.rst index 15c045ac31d..c70bc0b1036 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.RenderedSST2Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.RenderedSST2Dataset.rst @@ -17,7 +17,7 @@ mindspore.dataset.RenderedSST2Dataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` ,下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -31,7 +31,7 @@ mindspore.dataset.RenderedSST2Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.SBDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.SBDataset.rst index 2949c3e3f43..1fcdf8b02ee 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.SBDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.SBDataset.rst @@ -36,7 +36,7 @@ mindspore.dataset.SBDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.SBUDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.SBUDataset.rst index ce4bb13bbd1..9b9106938ed 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.SBUDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.SBUDataset.rst @@ -16,7 +16,7 @@ mindspore.dataset.SBUDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -29,7 +29,7 @@ mindspore.dataset.SBUDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.SQuADDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.SQuADDataset.rst index 74d6a0ac4b8..c83709de348 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.SQuADDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.SQuADDataset.rst @@ -26,7 +26,7 @@ mindspore.dataset.SQuADDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -37,7 +37,7 @@ mindspore.dataset.SQuADDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于SQuAD数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.SST2Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.SST2Dataset.rst index 4fdcbbe0045..6d74806adb2 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.SST2Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.SST2Dataset.rst @@ -23,7 +23,7 @@ mindspore.dataset.SST2Dataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -34,7 +34,7 @@ mindspore.dataset.SST2Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于SST2数据集:** 
diff --git a/docs/api/api_python/dataset/mindspore.dataset.STL10Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.STL10Dataset.rst index ea6a4dd5d91..848c58315cd 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.STL10Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.STL10Dataset.rst @@ -18,7 +18,7 @@ mindspore.dataset.STL10Dataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -32,7 +32,7 @@ mindspore.dataset.STL10Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.SUN397Dataset.rst b/docs/api/api_python/dataset/mindspore.dataset.SUN397Dataset.rst index 2f1f3b5d129..10406bf671d 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.SUN397Dataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.SUN397Dataset.rst @@ -16,7 +16,7 @@ mindspore.dataset.SUN397Dataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` ,下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -29,7 +29,7 @@ mindspore.dataset.SUN397Dataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.SVHNDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.SVHNDataset.rst index 847df7f9259..216f357ba68 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.SVHNDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.SVHNDataset.rst @@ -29,7 +29,7 @@ mindspore.dataset.SVHNDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.SemeionDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.SemeionDataset.rst index 1d3c7d91360..d9941783a44 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.SemeionDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.SemeionDataset.rst @@ -15,7 +15,7 @@ mindspore.dataset.SemeionDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -28,7 +28,7 @@ mindspore.dataset.SemeionDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.SogouNewsDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.SogouNewsDataset.rst index 633860a7aed..3ca9e93d1d6 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.SogouNewsDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.SogouNewsDataset.rst @@ -22,7 +22,7 @@ mindspore.dataset.SogouNewsDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - **num_parallel_workers** (int, 可选) - 指定读取数据的工作线程数。默认值: ``None`` ,使用全局默认线程数(8),也可以通过 :func:`mindspore.dataset.config.set_num_parallel_workers` 配置全局线程数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -32,7 +32,7 @@ mindspore.dataset.SogouNewsDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于SogouNew数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.SpeechCommandsDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.SpeechCommandsDataset.rst index f045ba81c00..41824e4e1b1 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.SpeechCommandsDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.SpeechCommandsDataset.rst @@ -18,7 +18,7 @@ mindspore.dataset.SpeechCommandsDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 
`单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -31,7 +31,7 @@ mindspore.dataset.SpeechCommandsDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.TFRecordDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.TFRecordDataset.rst index 5e33ab7c016..f1bf6056c0c 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.TFRecordDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.TFRecordDataset.rst @@ -30,7 +30,7 @@ mindspore.dataset.TFRecordDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后,`num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - **shard_equal_rows** (bool, 可选) - 分布式训练时,为所有分片获取等量的数据行数。默认值: ``False`` 。如果 `shard_equal_rows` 为 ``False`` ,则可能会使得每个分片的数据条目不相等,从而导致分布式训练失败。因此当每个TFRecord文件的数据数量不相等时,建议将此参数设置为 ``True`` 。注意,只有当指定了 `num_shards` 时才能指定此参数。当 `compression_type` 非 ``None`` ,且指定了 `num_samples` 或numRows字段(由参数 `schema` 定义)时,`shard_equal_rows` 会被视为 ``True`` 。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 - **compression_type** (str, 可选) - 用于所有文件的压缩类型,必须是 ``“”`` , ``“GZIP”`` 或 ``“ZLIB”`` 。默认值: ``None`` ,即空字符串。 建议在 `compression_type` 为 ``"GZIP"`` 或 ``"ZLIB"`` 时,指定 `num_samples` 或numRows字段(由参数 `schema` 定义)以避免出现为了获取文件大小对同一个文件进行多次解压而导致性能下降的问题。 @@ -46,6 +46,6 @@ mindspore.dataset.TFRecordDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
include:: mindspore.dataset.api_list_nlp.rst diff --git a/docs/api/api_python/dataset/mindspore.dataset.TedliumDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.TedliumDataset.rst index 87fea71c970..2040c4c4a54 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.TedliumDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.TedliumDataset.rst @@ -21,7 +21,7 @@ mindspore.dataset.TedliumDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -34,7 +34,7 @@ mindspore.dataset.TedliumDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.TextFileDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.TextFileDataset.rst index 4946ce33ada..6bd31236bd8 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.TextFileDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.TextFileDataset.rst @@ -18,7 +18,7 @@ - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **ValueError** - `dataset_files` 参数所指向的文件无效或不存在。 @@ -29,6 +29,6 @@ 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. include:: mindspore.dataset.api_list_nlp.rst diff --git a/docs/api/api_python/dataset/mindspore.dataset.UDPOSDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.UDPOSDataset.rst index f5244bbb9a5..05940d0f855 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.UDPOSDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.UDPOSDataset.rst @@ -22,7 +22,7 @@ mindspore.dataset.UDPOSDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - **num_parallel_workers** (int, 可选) - 指定读取数据的工作线程数。默认值: ``None`` ,使用全局默认线程数(8),也可以通过 :func:`mindspore.dataset.config.set_num_parallel_workers` 配置全局线程数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - 
`dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -32,7 +32,7 @@ mindspore.dataset.UDPOSDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于UDPOS数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.USPSDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.USPSDataset.rst index f3fe8bae162..1b89bab1e1f 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.USPSDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.USPSDataset.rst @@ -22,7 +22,7 @@ mindspore.dataset.USPSDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含数据文件。 @@ -34,7 +34,7 @@ mindspore.dataset.USPSDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于USPS数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.VOCDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.VOCDataset.rst index edad366ddd6..3c2373bbea7 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.VOCDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.VOCDataset.rst @@ -27,7 +27,7 @@ mindspore.dataset.VOCDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 - **extra_metadata** (bool, 可选) - 用于指定是否额外输出一个数据列用于表示图片元信息。如果为 
``True`` ,则将额外输出一个名为 `[_meta-filename, dtype=string]` 的数据列。默认值: ``False`` 。 - **decrypt** (callable, 可选) - 图像解密函数,接受加密的图片路径并返回bytes类型的解密数据。默认值: ``None`` ,不进行解密。 @@ -48,7 +48,7 @@ mindspore.dataset.VOCDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: - 当参数 `extra_metadata` 为True时,还需使用 `rename` 操作删除额外数据列 '_meta-filename'的前缀 '_meta-', diff --git a/docs/api/api_python/dataset/mindspore.dataset.WIDERFaceDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.WIDERFaceDataset.rst index 7fb0c54b1e0..36f4bf41310 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.WIDERFaceDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.WIDERFaceDataset.rst @@ -19,7 +19,7 @@ mindspore.dataset.WIDERFaceDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 不包含任何数据文件。 @@ -35,7 +35,7 @@ mindspore.dataset.WIDERFaceDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.WaitedDSCallback.rst b/docs/api/api_python/dataset/mindspore.dataset.WaitedDSCallback.rst index 1a5dcccd2bd..4ecf2137501 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.WaitedDSCallback.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.WaitedDSCallback.rst @@ -3,11 +3,11 @@ mindspore.dataset.WaitedDSCallback .. 
py:class:: mindspore.dataset.WaitedDSCallback(step_size=1) - 阻塞式数据处理回调类的抽象基类,用于与训练回调类 `mindspore.train.Callback `_ 的同步。 + 阻塞式数据处理回调类的抽象基类,用于与训练回调类 `mindspore.train.Callback `_ 的同步。 可用于在step或epoch开始前执行自定义的回调方法,例如在自动数据增强中根据上一个epoch的loss值来更新增强操作参数配置。 - 用户可通过 `train_run_context` 获取网络训练相关信息,如 `network` 、 `train_network` 、 `epoch_num` 、 `batch_num` 、 `loss_fn` 、 `optimizer` 、 `parallel_mode` 、 `device_number` 、 `list_callback` 、 `cur_epoch_num` 、 `cur_step_num` 、 `dataset_sink_mode` 、 `net_outputs` 等,详见 `mindspore.train.Callback `_ 。 + 用户可通过 `train_run_context` 获取网络训练相关信息,如 `network` 、 `train_network` 、 `epoch_num` 、 `batch_num` 、 `loss_fn` 、 `optimizer` 、 `parallel_mode` 、 `device_number` 、 `list_callback` 、 `cur_epoch_num` 、 `cur_step_num` 、 `dataset_sink_mode` 、 `net_outputs` 等,详见 `mindspore.train.Callback `_ 。 用户可通过 `ds_run_context` 获取数据处理管道相关信息,包括 `cur_epoch_num` (当前epoch数)、 `cur_step_num_in_epoch` (当前epoch的step数)、 `cur_step_num` (当前step数)。 diff --git a/docs/api/api_python/dataset/mindspore.dataset.WikiTextDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.WikiTextDataset.rst index 3117e67e154..6a8a16b1620 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.WikiTextDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.WikiTextDataset.rst @@ -21,7 +21,7 @@ mindspore.dataset.WikiTextDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -33,7 +33,7 @@ mindspore.dataset.WikiTextDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于WikiText数据集:** diff --git 
a/docs/api/api_python/dataset/mindspore.dataset.YahooAnswersDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.YahooAnswersDataset.rst index 5e4eda92fdf..48a2966630e 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.YahooAnswersDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.YahooAnswersDataset.rst @@ -22,7 +22,7 @@ mindspore.dataset.YahooAnswersDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 参数所指向的文件目录不存在或缺少数据集文件。 @@ -33,7 +33,7 @@ mindspore.dataset.YahooAnswersDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于YahooAnswers数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.YelpReviewDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.YelpReviewDataset.rst index f10cb8e5352..f544284eba4 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.YelpReviewDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.YelpReviewDataset.rst @@ -23,7 +23,7 @@ mindspore.dataset.YelpReviewDataset - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - **num_parallel_workers** (int, 可选) - 指定读取数据的工作线程数。默认值: ``None`` ,使用全局默认线程数(8),也可以通过 :func:`mindspore.dataset.config.set_num_parallel_workers` 配置全局线程数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 
参数所指向的文件目录不存在或缺少数据集文件。 @@ -33,7 +33,7 @@ mindspore.dataset.YelpReviewDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ **关于YelpReview数据集:** diff --git a/docs/api/api_python/dataset/mindspore.dataset.YesNoDataset.rst b/docs/api/api_python/dataset/mindspore.dataset.YesNoDataset.rst index 08a5a33f7c1..eb69a6a6d52 100644 --- a/docs/api/api_python/dataset/mindspore.dataset.YesNoDataset.rst +++ b/docs/api/api_python/dataset/mindspore.dataset.YesNoDataset.rst @@ -16,7 +16,7 @@ mindspore.dataset.YesNoDataset - **sampler** (Sampler, 可选) - 指定从数据集中选取样本的采样器。默认值: ``None`` 。下表中会展示不同配置的预期行为。 - **num_shards** (int, 可选) - 指定分布式训练时将数据集进行划分的分片数。默认值: ``None`` 。指定此参数后, `num_samples` 表示每个分片的最大样本数。 - **shard_id** (int, 可选) - 指定分布式训练时使用的分片ID号。默认值: ``None`` 。只有当指定了 `num_shards` 时才能指定此参数。 - - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 + - **cache** (:class:`~.dataset.DatasetCache`, 可选) - 单节点数据缓存服务,用于加快数据集处理,详情请阅读 `单节点数据缓存 `_ 。默认值: ``None`` ,不使用缓存。 异常: - **RuntimeError** - `dataset_dir` 路径下不包含任何数据文件。 @@ -29,7 +29,7 @@ mindspore.dataset.YesNoDataset 教程样例: - `使用数据Pipeline加载 & 处理数据集 - `_ + `_ .. 
note:: 入参 `num_samples` 、 `shuffle` 、 `num_shards` 、 `shard_id` 可用于控制数据集所使用的采样器,其与入参 `sampler` 搭配使用的效果如下。 diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.AllpassBiquad.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.AllpassBiquad.rst index 93f99381ed6..37f81abae0c 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.AllpassBiquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.AllpassBiquad.rst @@ -29,4 +29,4 @@ mindspore.dataset.audio.AllpassBiquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.AmplitudeToDB.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.AmplitudeToDB.rst index f0d4436bd70..72c464fdff9 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.AmplitudeToDB.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.AmplitudeToDB.rst @@ -27,4 +27,4 @@ mindspore.dataset.audio.AmplitudeToDB 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Angle.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Angle.rst index 1e16520b98e..239c87cc29a 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Angle.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Angle.rst @@ -12,4 +12,4 @@ mindspore.dataset.audio.Angle 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandBiquad.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandBiquad.rst index 2163d42a153..8738420fca4 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandBiquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandBiquad.rst @@ -28,4 +28,4 @@ mindspore.dataset.audio.BandBiquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandpassBiquad.rst 
b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandpassBiquad.rst index b1f6a7f0f42..4dd203162be 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandpassBiquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandpassBiquad.rst @@ -36,4 +36,4 @@ mindspore.dataset.audio.BandpassBiquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandrejectBiquad.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandrejectBiquad.rst index 8f4c3b8b184..2913002fdce 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandrejectBiquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BandrejectBiquad.rst @@ -31,4 +31,4 @@ mindspore.dataset.audio.BandrejectBiquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BassBiquad.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BassBiquad.rst index 449a0d66e89..fd4bde3eb2f 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BassBiquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.BassBiquad.rst @@ -31,4 +31,4 @@ mindspore.dataset.audio.BassBiquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Biquad.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Biquad.rst index c7a76956d5f..f550b49f335 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Biquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Biquad.rst @@ -26,4 +26,4 @@ mindspore.dataset.audio.Biquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.ComplexNorm.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.ComplexNorm.rst index e2093d286aa..e5542d8c3e1 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.ComplexNorm.rst +++ 
b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.ComplexNorm.rst @@ -17,4 +17,4 @@ mindspore.dataset.audio.ComplexNorm 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.ComputeDeltas.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.ComputeDeltas.rst index 8a116431ee2..16cada67423 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.ComputeDeltas.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.ComputeDeltas.rst @@ -33,4 +33,4 @@ mindspore.dataset.audio.ComputeDeltas 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Contrast.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Contrast.rst index aa26136c209..dd1bb06bf6f 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Contrast.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Contrast.rst @@ -21,4 +21,4 @@ mindspore.dataset.audio.Contrast 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DBToAmplitude.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DBToAmplitude.rst index b7bd8579f11..a4927207162 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DBToAmplitude.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DBToAmplitude.rst @@ -15,4 +15,4 @@ mindspore.dataset.audio.DBToAmplitude 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DCShift.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DCShift.rst index 16ec11d0cdd..f7ae4dd9636 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DCShift.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DCShift.rst @@ -16,4 +16,4 @@ mindspore.dataset.audio.DCShift 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DeemphBiquad.rst 
b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DeemphBiquad.rst index 6d48df9e866..8091f30f786 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DeemphBiquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DeemphBiquad.rst @@ -17,4 +17,4 @@ mindspore.dataset.audio.DeemphBiquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DetectPitchFrequency.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DetectPitchFrequency.rst index 7e6c06fea60..78b46fb1e27 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DetectPitchFrequency.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.DetectPitchFrequency.rst @@ -27,4 +27,4 @@ mindspore.dataset.audio.DetectPitchFrequency 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Dither.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Dither.rst index b44495e937e..693066bc104 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Dither.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Dither.rst @@ -18,4 +18,4 @@ mindspore.dataset.audio.Dither 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.EqualizerBiquad.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.EqualizerBiquad.rst index 3f3ac5ca71a..b51ad559863 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.EqualizerBiquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.EqualizerBiquad.rst @@ -23,4 +23,4 @@ mindspore.dataset.audio.EqualizerBiquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Fade.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Fade.rst index c6ba1d718e2..7bf512cf583 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Fade.rst +++ 
b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Fade.rst @@ -22,4 +22,4 @@ mindspore.dataset.audio.Fade 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Filtfilt.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Filtfilt.rst index 6fde0b00cd8..e60f2eed543 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Filtfilt.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Filtfilt.rst @@ -21,4 +21,4 @@ mindspore.dataset.audio.Filtfilt 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Flanger.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Flanger.rst index ce098df10ea..0eb3442565e 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Flanger.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Flanger.rst @@ -41,4 +41,4 @@ mindspore.dataset.audio.Flanger 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.FrequencyMasking.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.FrequencyMasking.rst index fd17c4a7fcd..6cb013d189f 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.FrequencyMasking.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.FrequencyMasking.rst @@ -25,7 +25,7 @@ mindspore.dataset.audio.FrequencyMasking 教程样例: - `音频变换样例库 - `_ + `_ .. 
image:: frequency_masking_original.png diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Gain.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Gain.rst index 473a81116b9..8d858b51fff 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Gain.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Gain.rst @@ -13,4 +13,4 @@ mindspore.dataset.audio.Gain 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.GriffinLim.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.GriffinLim.rst index ff7c7313963..1caa092baa9 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.GriffinLim.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.GriffinLim.rst @@ -43,4 +43,4 @@ mindspore.dataset.audio.GriffinLim 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.HighpassBiquad.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.HighpassBiquad.rst index fa7388aaaba..cc2e876784c 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.HighpassBiquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.HighpassBiquad.rst @@ -22,4 +22,4 @@ mindspore.dataset.audio.HighpassBiquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.InverseMelScale.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.InverseMelScale.rst index b007d42f4cb..97ea853e0bd 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.InverseMelScale.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.InverseMelScale.rst @@ -41,4 +41,4 @@ mindspore.dataset.audio.InverseMelScale 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.InverseSpectrogram.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.InverseSpectrogram.rst index 
670b7329bc6..cd8f4614bd1 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.InverseSpectrogram.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.InverseSpectrogram.rst @@ -37,4 +37,4 @@ mindspore.dataset.audio.InverseSpectrogram 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LFCC.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LFCC.rst index 53d68007b48..ff9cc8aeb34 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LFCC.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LFCC.rst @@ -44,4 +44,4 @@ mindspore.dataset.audio.LFCC 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LFilter.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LFilter.rst index 7b5f55e74a8..ffd18c53594 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LFilter.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LFilter.rst @@ -21,4 +21,4 @@ mindspore.dataset.audio.LFilter 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LowpassBiquad.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LowpassBiquad.rst index a64a1174274..b31137a4fb0 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LowpassBiquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.LowpassBiquad.rst @@ -29,4 +29,4 @@ mindspore.dataset.audio.LowpassBiquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MFCC.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MFCC.rst index bc4603770f4..0cb0e786faa 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MFCC.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MFCC.rst @@ -40,4 +40,4 @@ mindspore.dataset.audio.MFCC 教程样例: - `音频变换样例库 - `_ + `_ diff --git 
a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Magphase.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Magphase.rst index f732238be0e..a9a8af59292 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Magphase.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Magphase.rst @@ -13,4 +13,4 @@ mindspore.dataset.audio.Magphase 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MaskAlongAxis.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MaskAlongAxis.rst index d2afe8c519f..b072d07e416 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MaskAlongAxis.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MaskAlongAxis.rst @@ -18,4 +18,4 @@ mindspore.dataset.audio.MaskAlongAxis 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MaskAlongAxisIID.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MaskAlongAxisIID.rst index 0fcf122f240..fd2afbe2eef 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MaskAlongAxisIID.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MaskAlongAxisIID.rst @@ -21,4 +21,4 @@ mindspore.dataset.audio.MaskAlongAxisIID 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MelScale.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MelScale.rst index 6d0f60fcee3..21af17f2b42 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MelScale.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MelScale.rst @@ -31,4 +31,4 @@ mindspore.dataset.audio.MelScale 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MelSpectrogram.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MelSpectrogram.rst index 85f8cabb031..ef575cf3a28 100644 --- 
a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MelSpectrogram.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MelSpectrogram.rst @@ -53,4 +53,4 @@ mindspore.dataset.audio.MelSpectrogram 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MuLawDecoding.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MuLawDecoding.rst index 880f0bfa862..d4b84202b1e 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MuLawDecoding.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MuLawDecoding.rst @@ -15,4 +15,4 @@ mindspore.dataset.audio.MuLawDecoding 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MuLawEncoding.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MuLawEncoding.rst index 94bce58f8df..51f29442d44 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MuLawEncoding.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.MuLawEncoding.rst @@ -14,4 +14,4 @@ mindspore.dataset.audio.MuLawEncoding 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Overdrive.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Overdrive.rst index 710c540472c..be3c7b8b56d 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Overdrive.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Overdrive.rst @@ -20,4 +20,4 @@ mindspore.dataset.audio.Overdrive 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.PhaseVocoder.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.PhaseVocoder.rst index d2ff7a4e90f..72ed970fbf4 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.PhaseVocoder.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.PhaseVocoder.rst @@ -17,4 +17,4 @@ 
mindspore.dataset.audio.PhaseVocoder 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Phaser.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Phaser.rst index a96b0d22fb2..97b8ce36e19 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Phaser.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Phaser.rst @@ -34,4 +34,4 @@ mindspore.dataset.audio.Phaser 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.PitchShift.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.PitchShift.rst index 827d488d12d..6b019f35e2e 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.PitchShift.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.PitchShift.rst @@ -30,4 +30,4 @@ mindspore.dataset.audio.PitchShift 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Resample.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Resample.rst index d2f44411490..7d6d18a31c3 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Resample.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Resample.rst @@ -29,4 +29,4 @@ mindspore.dataset.audio.Resample 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.RiaaBiquad.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.RiaaBiquad.rst index c2da0e468ef..c1c69f0f4bc 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.RiaaBiquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.RiaaBiquad.rst @@ -16,4 +16,4 @@ mindspore.dataset.audio.RiaaBiquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.SlidingWindowCmn.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.SlidingWindowCmn.rst index 70867248e48..b0ddc93e964 100644 --- 
a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.SlidingWindowCmn.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.SlidingWindowCmn.rst @@ -22,4 +22,4 @@ mindspore.dataset.audio.SlidingWindowCmn 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.SpectralCentroid.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.SpectralCentroid.rst index 4caef16e296..958bf89ae01 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.SpectralCentroid.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.SpectralCentroid.rst @@ -31,4 +31,4 @@ mindspore.dataset.audio.SpectralCentroid 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Spectrogram.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Spectrogram.rst index 86449f6ca83..e87cf07523c 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Spectrogram.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Spectrogram.rst @@ -40,4 +40,4 @@ mindspore.dataset.audio.Spectrogram 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TimeMasking.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TimeMasking.rst index 9f5be4caf4b..7ad57a71bf0 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TimeMasking.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TimeMasking.rst @@ -25,7 +25,7 @@ mindspore.dataset.audio.TimeMasking 教程样例: - `音频变换样例库 - `_ + `_ .. 
image:: time_masking_original.png diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TimeStretch.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TimeStretch.rst index 2d7757c6e4a..9a705503bca 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TimeStretch.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TimeStretch.rst @@ -23,7 +23,7 @@ mindspore.dataset.audio.TimeStretch 教程样例: - `音频变换样例库 - `_ + `_ .. image:: time_stretch_rate1.5.png diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TrebleBiquad.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TrebleBiquad.rst index 65dacce6f01..a89c17c33e7 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TrebleBiquad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.TrebleBiquad.rst @@ -24,4 +24,4 @@ mindspore.dataset.audio.TrebleBiquad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Vad.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Vad.rst index 703c207ba5f..4246ec09328 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Vad.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Vad.rst @@ -67,4 +67,4 @@ mindspore.dataset.audio.Vad 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Vol.rst b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Vol.rst index 6b1cc3b13f4..fd7bbab0c68 100644 --- a/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Vol.rst +++ b/docs/api/api_python/dataset_audio/mindspore.dataset.audio.Vol.rst @@ -22,4 +22,4 @@ mindspore.dataset.audio.Vol 教程样例: - `音频变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.AddToken.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.AddToken.rst index 7ae46e28ab0..411fefcfacc 100644 --- 
a/docs/api/api_python/dataset_text/mindspore.dataset.text.AddToken.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.AddToken.rst @@ -15,4 +15,4 @@ mindspore.dataset.text.AddToken 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.BasicTokenizer.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.BasicTokenizer.rst index 46fcf1d5ce1..d60f06fea94 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.BasicTokenizer.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.BasicTokenizer.rst @@ -25,4 +25,4 @@ 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.BertTokenizer.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.BertTokenizer.rst index 33085945b31..2d8f5a45d3b 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.BertTokenizer.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.BertTokenizer.rst @@ -33,4 +33,4 @@ mindspore.dataset.text.BertTokenizer 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.CaseFold.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.CaseFold.rst index e5ab532d299..35c060ef051 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.CaseFold.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.CaseFold.rst @@ -11,4 +11,4 @@ mindspore.dataset.text.CaseFold 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.FilterWikipediaXML.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.FilterWikipediaXML.rst index 234e8fa55d4..e842f3a573d 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.FilterWikipediaXML.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.FilterWikipediaXML.rst @@ -9,4 +9,4 @@ mindspore.dataset.text.FilterWikipediaXML 教程样例: - `文本变换样例库 - `_ + `_ diff --git 
a/docs/api/api_python/dataset_text/mindspore.dataset.text.JiebaTokenizer.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.JiebaTokenizer.rst index d86257f3b58..1bac10f85c5 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.JiebaTokenizer.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.JiebaTokenizer.rst @@ -24,7 +24,7 @@ mindspore.dataset.text.JiebaTokenizer 教程样例: - `文本变换样例库 - `_ + `_ .. py:method:: add_dict(user_dict) diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.Lookup.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.Lookup.rst index 2cc909cc059..55ed743f72d 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.Lookup.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.Lookup.rst @@ -19,4 +19,4 @@ mindspore.dataset.text.Lookup 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.Ngram.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.Ngram.rst index 89ca77abe9c..087bd3605e5 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.Ngram.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.Ngram.rst @@ -26,4 +26,4 @@ mindspore.dataset.text.Ngram 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.NormalizeUTF8.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.NormalizeUTF8.rst index 2ae17f188a1..2087db3d594 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.NormalizeUTF8.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.NormalizeUTF8.rst @@ -17,4 +17,4 @@ mindspore.dataset.text.NormalizeUTF8 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.PythonTokenizer.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.PythonTokenizer.rst index 01ad5dffbe5..1954354c5b6 100644 --- 
a/docs/api/api_python/dataset_text/mindspore.dataset.text.PythonTokenizer.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.PythonTokenizer.rst @@ -13,4 +13,4 @@ mindspore.dataset.text.PythonTokenizer 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.RegexReplace.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.RegexReplace.rst index 6b59ecd5bfa..8968f3013b0 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.RegexReplace.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.RegexReplace.rst @@ -20,4 +20,4 @@ mindspore.dataset.text.RegexReplace 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.RegexTokenizer.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.RegexTokenizer.rst index 20066d75cc6..f7900eba5c4 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.RegexTokenizer.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.RegexTokenizer.rst @@ -22,4 +22,4 @@ mindspore.dataset.text.RegexTokenizer 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.SentencePieceTokenizer.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.SentencePieceTokenizer.rst index df624967991..d9cec16e74c 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.SentencePieceTokenizer.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.SentencePieceTokenizer.rst @@ -20,4 +20,4 @@ mindspore.dataset.text.SentencePieceTokenizer 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.SlidingWindow.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.SlidingWindow.rst index 3d6a43a74d7..8ac4ccb702c 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.SlidingWindow.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.SlidingWindow.rst @@ -16,4 +16,4 @@ 
mindspore.dataset.text.SlidingWindow 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.ToNumber.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.ToNumber.rst index b926d525d29..3da584903b3 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.ToNumber.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.ToNumber.rst @@ -18,4 +18,4 @@ mindspore.dataset.text.ToNumber 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.ToVectors.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.ToVectors.rst index 2d1c8cd7662..98d1231299a 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.ToVectors.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.ToVectors.rst @@ -18,4 +18,4 @@ mindspore.dataset.text.ToVectors 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.Truncate.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.Truncate.rst index c849f595441..daffb9f118a 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.Truncate.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.Truncate.rst @@ -15,4 +15,4 @@ mindspore.dataset.text.Truncate 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.TruncateSequencePair.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.TruncateSequencePair.rst index 6a30f5f8f74..34f96e25d23 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.TruncateSequencePair.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.TruncateSequencePair.rst @@ -14,4 +14,4 @@ mindspore.dataset.text.TruncateSequencePair 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.UnicodeCharTokenizer.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.UnicodeCharTokenizer.rst index 
565b4f3ceb4..31d87de83c2 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.UnicodeCharTokenizer.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.UnicodeCharTokenizer.rst @@ -13,4 +13,4 @@ mindspore.dataset.text.UnicodeCharTokenizer 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.UnicodeScriptTokenizer.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.UnicodeScriptTokenizer.rst index d6eb9e42593..619aaf4f7a6 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.UnicodeScriptTokenizer.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.UnicodeScriptTokenizer.rst @@ -17,4 +17,4 @@ mindspore.dataset.text.UnicodeScriptTokenizer 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.WhitespaceTokenizer.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.WhitespaceTokenizer.rst index b1c865731f8..2ac593ff3de 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.WhitespaceTokenizer.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.WhitespaceTokenizer.rst @@ -15,4 +15,4 @@ mindspore.dataset.text.WhitespaceTokenizer 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_text/mindspore.dataset.text.WordpieceTokenizer.rst b/docs/api/api_python/dataset_text/mindspore.dataset.text.WordpieceTokenizer.rst index ba071959cc6..d9a4d33480d 100644 --- a/docs/api/api_python/dataset_text/mindspore.dataset.text.WordpieceTokenizer.rst +++ b/docs/api/api_python/dataset_text/mindspore.dataset.text.WordpieceTokenizer.rst @@ -22,4 +22,4 @@ mindspore.dataset.text.WordpieceTokenizer 教程样例: - `文本变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustBrightness.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustBrightness.rst index f98f315d086..ee7dc20661b 100644 --- 
a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustBrightness.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustBrightness.rst @@ -18,7 +18,7 @@ mindspore.dataset.vision.AdjustBrightness 教程样例: - `视觉变换样例库 - `_ + `_ .. py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustContrast.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustContrast.rst index 0439fe76163..fe265e733b1 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustContrast.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustContrast.rst @@ -18,7 +18,7 @@ mindspore.dataset.vision.AdjustContrast 教程样例: - `视觉变换样例库 - `_ + `_ .. py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustGamma.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustGamma.rst index dcf4386a443..64870c51667 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustGamma.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustGamma.rst @@ -22,4 +22,4 @@ mindspore.dataset.vision.AdjustGamma 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustHue.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustHue.rst index a570581dc33..4ba6220ddde 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustHue.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustHue.rst @@ -17,7 +17,7 @@ mindspore.dataset.vision.AdjustHue 教程样例: - `视觉变换样例库 - `_ + `_ .. 
py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustSaturation.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustSaturation.rst index 2043874f343..66a90f83b97 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustSaturation.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustSaturation.rst @@ -19,7 +19,7 @@ mindspore.dataset.vision.AdjustSaturation 教程样例: - `视觉变换样例库 - `_ + `_ .. py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustSharpness.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustSharpness.rst index 64ac367cd9a..b7fb33d906c 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustSharpness.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AdjustSharpness.rst @@ -16,4 +16,4 @@ mindspore.dataset.vision.AdjustSharpness 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Affine.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Affine.rst index c51ea6e44e9..1a7039e6401 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Affine.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Affine.rst @@ -30,7 +30,7 @@ mindspore.dataset.vision.Affine 教程样例: - `视觉变换样例库 - `_ + `_ .. 
py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AutoAugment.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AutoAugment.rst index 003d6d80266..90dd24befd9 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AutoAugment.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AutoAugment.rst @@ -28,4 +28,4 @@ mindspore.dataset.vision.AutoAugment 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AutoContrast.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AutoContrast.rst index 12d69997c1d..d84a3c9bb52 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AutoContrast.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.AutoContrast.rst @@ -18,4 +18,4 @@ mindspore.dataset.vision.AutoContrast 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.BoundingBoxAugment.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.BoundingBoxAugment.rst index db740ffa859..0263f467691 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.BoundingBoxAugment.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.BoundingBoxAugment.rst @@ -17,4 +17,4 @@ mindspore.dataset.vision.BoundingBoxAugment 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CenterCrop.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CenterCrop.rst index 514382cb17a..b83107b5491 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CenterCrop.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CenterCrop.rst @@ -18,4 +18,4 @@ mindspore.dataset.vision.CenterCrop 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ConvertColor.rst 
b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ConvertColor.rst index a352ff43959..6dfd444c0e8 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ConvertColor.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ConvertColor.rst @@ -35,4 +35,4 @@ mindspore.dataset.vision.ConvertColor 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Crop.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Crop.rst index fc63b62c354..7103e2a7b19 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Crop.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Crop.rst @@ -23,7 +23,7 @@ mindspore.dataset.vision.Crop 教程样例: - `视觉变换样例库 - `_ + `_ .. py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CutMixBatch.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CutMixBatch.rst index 96286d473b1..ce8ab817b9b 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CutMixBatch.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CutMixBatch.rst @@ -21,4 +21,4 @@ mindspore.dataset.vision.CutMixBatch 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CutOut.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CutOut.rst index cbc22b8ccc3..fb4b28ec2ab 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CutOut.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.CutOut.rst @@ -20,4 +20,4 @@ mindspore.dataset.vision.CutOut 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Decode.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Decode.rst index 366ee51b0b0..5bec886beba 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Decode.rst +++ 
b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Decode.rst @@ -17,7 +17,7 @@ mindspore.dataset.vision.Decode 教程样例: - `视觉变换样例库 - `_ + `_ .. py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Equalize.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Equalize.rst index 606c84962a6..b515a18aefb 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Equalize.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Equalize.rst @@ -10,4 +10,4 @@ mindspore.dataset.vision.Equalize 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Erase.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Erase.rst index d92d101a67c..5d0646170ea 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Erase.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Erase.rst @@ -31,4 +31,4 @@ mindspore.dataset.vision.Erase 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.FiveCrop.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.FiveCrop.rst index 4d26cd6e25e..c8eef041a26 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.FiveCrop.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.FiveCrop.rst @@ -14,4 +14,4 @@ mindspore.dataset.vision.FiveCrop 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.GaussianBlur.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.GaussianBlur.rst index 9b7c7ff2d28..2fb3f23a0cf 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.GaussianBlur.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.GaussianBlur.rst @@ -25,7 +25,7 @@ mindspore.dataset.vision.GaussianBlur 教程样例: - `视觉变换样例库 - `_ + `_ .. 
py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Grayscale.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Grayscale.rst index e5fa9e20e7c..3e765928c0b 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Grayscale.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Grayscale.rst @@ -14,4 +14,4 @@ mindspore.dataset.vision.Grayscale 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HWC2CHW.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HWC2CHW.rst index 6a3fbf698c2..5f14b2f62ff 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HWC2CHW.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HWC2CHW.rst @@ -13,4 +13,4 @@ mindspore.dataset.vision.HWC2CHW 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HorizontalFlip.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HorizontalFlip.rst index d2133f9c052..38084e14a0c 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HorizontalFlip.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HorizontalFlip.rst @@ -12,7 +12,7 @@ mindspore.dataset.vision.HorizontalFlip 教程样例: - `视觉变换样例库 - `_ + `_ .. 
py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HsvToRgb.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HsvToRgb.rst index 1d8a29dfa86..bd6574e02ea 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HsvToRgb.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.HsvToRgb.rst @@ -13,4 +13,4 @@ mindspore.dataset.vision.HsvToRgb 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Invert.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Invert.rst index a288daccc13..579b1ef9e22 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Invert.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Invert.rst @@ -12,4 +12,4 @@ mindspore.dataset.vision.Invert 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.LinearTransformation.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.LinearTransformation.rst index 80acd40028c..0cd2bca29c9 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.LinearTransformation.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.LinearTransformation.rst @@ -20,4 +20,4 @@ mindspore.dataset.vision.LinearTransformation 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.MixUp.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.MixUp.rst index 730d1ab45a1..5debfe36522 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.MixUp.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.MixUp.rst @@ -21,4 +21,4 @@ mindspore.dataset.vision.MixUp 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.MixUpBatch.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.MixUpBatch.rst index 
cd6678c2266..1effd9e68aa 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.MixUpBatch.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.MixUpBatch.rst @@ -19,4 +19,4 @@ mindspore.dataset.vision.MixUpBatch 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Normalize.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Normalize.rst index 479450dd185..ae117ec68fe 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Normalize.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Normalize.rst @@ -26,7 +26,7 @@ mindspore.dataset.vision.Normalize 教程样例: - `视觉变换样例库 - `_ + `_ .. py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.NormalizePad.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.NormalizePad.rst index 34f27f12675..52449e3d53a 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.NormalizePad.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.NormalizePad.rst @@ -22,4 +22,4 @@ mindspore.dataset.vision.NormalizePad 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Pad.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Pad.rst index 30a7293db18..6acee6f071a 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Pad.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Pad.rst @@ -34,7 +34,7 @@ mindspore.dataset.vision.Pad 教程样例: - `视觉变换样例库 - `_ + `_ .. 
py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.PadToSize.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.PadToSize.rst index 996b4e20d0a..9700e8146da 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.PadToSize.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.PadToSize.rst @@ -35,4 +35,4 @@ mindspore.dataset.vision.PadToSize 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Perspective.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Perspective.rst index ed64b60dae9..aa39259f8bd 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Perspective.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Perspective.rst @@ -21,7 +21,7 @@ mindspore.dataset.vision.Perspective 教程样例: - `视觉变换样例库 - `_ + `_ .. py:method:: device(device_target="CPU") diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Posterize.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Posterize.rst index a349a3342ad..4a9d004017b 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Posterize.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Posterize.rst @@ -15,4 +15,4 @@ mindspore.dataset.vision.Posterize 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandAugment.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandAugment.rst index e224b9456a0..aa1e92e19b7 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandAugment.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandAugment.rst @@ -33,4 +33,4 @@ mindspore.dataset.vision.RandAugment 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAdjustSharpness.rst 
b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAdjustSharpness.rst index 1814024d4df..2b04521dab5 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAdjustSharpness.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAdjustSharpness.rst @@ -19,4 +19,4 @@ mindspore.dataset.vision.RandomAdjustSharpness 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAffine.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAffine.rst index 9e3a7c379ad..dd493e7566d 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAffine.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAffine.rst @@ -40,4 +40,4 @@ mindspore.dataset.vision.RandomAffine 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAutoContrast.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAutoContrast.rst index ac4198cb666..f3093192e7f 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAutoContrast.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomAutoContrast.rst @@ -21,4 +21,4 @@ mindspore.dataset.vision.RandomAutoContrast 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomColor.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomColor.rst index 6ff571357ce..cb5afe50dac 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomColor.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomColor.rst @@ -16,4 +16,4 @@ mindspore.dataset.vision.RandomColor 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomColorAdjust.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomColorAdjust.rst index 
9d0fced526e..0352e8dab79 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomColorAdjust.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomColorAdjust.rst @@ -34,4 +34,4 @@ mindspore.dataset.vision.RandomColorAdjust 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCrop.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCrop.rst index 08b1d3c37e6..cd8feb8ad59 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCrop.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCrop.rst @@ -41,4 +41,4 @@ mindspore.dataset.vision.RandomCrop 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCropDecodeResize.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCropDecodeResize.rst index df8b1f9db73..0f35c910707 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCropDecodeResize.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCropDecodeResize.rst @@ -29,4 +29,4 @@ mindspore.dataset.vision.RandomCropDecodeResize 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCropWithBBox.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCropWithBBox.rst index a5f772adeb1..5db89f37f68 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCropWithBBox.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomCropWithBBox.rst @@ -39,4 +39,4 @@ mindspore.dataset.vision.RandomCropWithBBox 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomEqualize.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomEqualize.rst index f8882d194c9..d3fd80f780a 100644 --- 
a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomEqualize.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomEqualize.rst @@ -15,4 +15,4 @@ mindspore.dataset.vision.RandomEqualize 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomErasing.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomErasing.rst index 3e9df5125b4..b3a87cadfe4 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomErasing.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomErasing.rst @@ -30,4 +30,4 @@ mindspore.dataset.vision.RandomErasing 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomGrayscale.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomGrayscale.rst index f0ab1868dd2..56ded250888 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomGrayscale.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomGrayscale.rst @@ -14,4 +14,4 @@ mindspore.dataset.vision.RandomGrayscale 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlip.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlip.rst index a3dd0734046..9a96791679a 100644 --- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlip.rst +++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlip.rst @@ -15,4 +15,4 @@ mindspore.dataset.vision.RandomHorizontalFlip 教程样例: - `视觉变换样例库 - `_ + `_ diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlipWithBBox.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlipWithBBox.rst index 209001d7df3..341693696a3 100644 --- 
a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlipWithBBox.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlipWithBBox.rst
@@ -15,4 +15,4 @@ mindspore.dataset.vision.RandomHorizontalFlipWithBBox
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomInvert.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomInvert.rst
index ac7497fdefd..0c80e76054e 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomInvert.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomInvert.rst
@@ -15,4 +15,4 @@ mindspore.dataset.vision.RandomInvert
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomLighting.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomLighting.rst
index 3b0583f8b0f..d78f5a6a242 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomLighting.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomLighting.rst
@@ -15,4 +15,4 @@ mindspore.dataset.vision.RandomLighting
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomPerspective.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomPerspective.rst
index 2c92ad923cc..6b61bb2b76b 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomPerspective.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomPerspective.rst
@@ -20,4 +20,4 @@ mindspore.dataset.vision.RandomPerspective
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomPosterize.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomPosterize.rst
index 7da68fde1bb..16c112957fd 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomPosterize.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomPosterize.rst
@@ -16,4 +16,4 @@ mindspore.dataset.vision.RandomPosterize
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResize.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResize.rst
index 9eb55763657..807b0634f23 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResize.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResize.rst
@@ -15,4 +15,4 @@ mindspore.dataset.vision.RandomResize
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizeWithBBox.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizeWithBBox.rst
index 1b9cf016e33..7c43332f2ad 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizeWithBBox.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizeWithBBox.rst
@@ -15,4 +15,4 @@ mindspore.dataset.vision.RandomResizeWithBBox
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCrop.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCrop.rst
index 0b482399197..2fa6d19675b 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCrop.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCrop.rst
@@ -28,4 +28,4 @@ mindspore.dataset.vision.RandomResizedCrop
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCropWithBBox.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCropWithBBox.rst
index 7740141c96d..c9c24a92a25 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCropWithBBox.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCropWithBBox.rst
@@ -27,4 +27,4 @@ mindspore.dataset.vision.RandomResizedCropWithBBox
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomRotation.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomRotation.rst
index 4b3820d4eea..83ccbe03e67 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomRotation.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomRotation.rst
@@ -24,4 +24,4 @@ mindspore.dataset.vision.RandomRotation
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSelectSubpolicy.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSelectSubpolicy.rst
index 29a2283167b..46245ecf6a7 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSelectSubpolicy.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSelectSubpolicy.rst
@@ -13,4 +13,4 @@ mindspore.dataset.vision.RandomSelectSubpolicy
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSharpness.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSharpness.rst
index 8fc1fdc3042..48f541d4a64 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSharpness.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSharpness.rst
@@ -15,4 +15,4 @@ mindspore.dataset.vision.RandomSharpness
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSolarize.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSolarize.rst
index 13b5be2687a..aef638022ee 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSolarize.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomSolarize.rst
@@ -14,4 +14,4 @@ mindspore.dataset.vision.RandomSolarize
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlip.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlip.rst
index 93c15233e79..5e0b379a953 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlip.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlip.rst
@@ -15,4 +15,4 @@ mindspore.dataset.vision.RandomVerticalFlip
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlipWithBBox.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlipWithBBox.rst
index c33c31de8d3..76a8960b93e 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlipWithBBox.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlipWithBBox.rst
@@ -15,4 +15,4 @@ mindspore.dataset.vision.RandomVerticalFlipWithBBox
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Rescale.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Rescale.rst
index 26c170b2ec7..abc1ddbdc3c 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Rescale.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Rescale.rst
@@ -17,4 +17,4 @@ mindspore.dataset.vision.Rescale
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Resize.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Resize.rst
index f317f138769..1a38c75d9a0 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Resize.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Resize.rst
@@ -26,7 +26,7 @@ mindspore.dataset.vision.Resize
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
 
     .. py:method:: device(device_target="CPU")
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ResizeWithBBox.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ResizeWithBBox.rst
index 1696a95c662..a85c4cd860f 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ResizeWithBBox.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ResizeWithBBox.rst
@@ -18,4 +18,4 @@ mindspore.dataset.vision.ResizeWithBBox
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ResizedCrop.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ResizedCrop.rst
index 6fcc60150af..b9c0326d86b 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ResizedCrop.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ResizedCrop.rst
@@ -34,7 +34,7 @@ mindspore.dataset.vision.ResizedCrop
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
 
     .. py:method:: device(device_target="CPU")
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RgbToHsv.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RgbToHsv.rst
index 8b32c4135e6..df35d0cf37d 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RgbToHsv.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.RgbToHsv.rst
@@ -13,4 +13,4 @@ mindspore.dataset.vision.RgbToHsv
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Rotate.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Rotate.rst
index e487ddf0e35..b495a35f8e0 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Rotate.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Rotate.rst
@@ -24,4 +24,4 @@ mindspore.dataset.vision.Rotate
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.SlicePatches.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.SlicePatches.rst
index 270beca9ca4..25895d1ad2d 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.SlicePatches.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.SlicePatches.rst
@@ -23,4 +23,4 @@ mindspore.dataset.vision.SlicePatches
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Solarize.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Solarize.rst
index ecc248e3acc..a890181176b 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Solarize.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.Solarize.rst
@@ -15,4 +15,4 @@ mindspore.dataset.vision.Solarize
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.TenCrop.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.TenCrop.rst
index 0ff74dffa82..12501726c2f 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.TenCrop.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.TenCrop.rst
@@ -16,4 +16,4 @@ mindspore.dataset.vision.TenCrop
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToNumpy.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToNumpy.rst
index 5261908c8c0..561ca859a85 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToNumpy.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToNumpy.rst
@@ -7,4 +7,4 @@ mindspore.dataset.vision.ToNumpy
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToPIL.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToPIL.rst
index 855eaf7a399..0f799518de7 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToPIL.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToPIL.rst
@@ -10,4 +10,4 @@ mindspore.dataset.vision.ToPIL
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToTensor.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToTensor.rst
index b53b836a320..cb8d91b2480 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToTensor.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToTensor.rst
@@ -14,4 +14,4 @@ mindspore.dataset.vision.ToTensor
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToType.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToType.rst
index af6b907d3cc..6dd1c9c18e2 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToType.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.ToType.rst
@@ -17,4 +17,4 @@ mindspore.dataset.vision.ToType
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.TrivialAugmentWide.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.TrivialAugmentWide.rst
index eadd81329a1..849a208e286 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.TrivialAugmentWide.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.TrivialAugmentWide.rst
@@ -27,4 +27,4 @@ mindspore.dataset.vision.TrivialAugmentWide
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.UniformAugment.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.UniformAugment.rst
index 3e77f23f16d..11e5cae8f20 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.UniformAugment.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.UniformAugment.rst
@@ -18,4 +18,4 @@ mindspore.dataset.vision.UniformAugment
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
diff --git a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.VerticalFlip.rst b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.VerticalFlip.rst
index 8e42fd394cc..d55be1518a7 100644
--- a/docs/api/api_python/dataset_vision/mindspore.dataset.vision.VerticalFlip.rst
+++ b/docs/api/api_python/dataset_vision/mindspore.dataset.vision.VerticalFlip.rst
@@ -12,7 +12,7 @@ mindspore.dataset.vision.VerticalFlip
 
     教程样例:
         - `视觉变换样例库
-          `_
+          `_
 
     .. py:method:: device(device_target="CPU")
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.ASGD.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.ASGD.rst
index 04fb919467b..5982134b163 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.ASGD.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.ASGD.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.ASGD
     Averaged Stochastic Gradient Descent 算法的实现。
 
     .. warning::
-        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adadelta.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adadelta.rst
index 2bf3d1a25c8..83dd04a4329 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adadelta.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adadelta.rst
@@ -32,7 +32,7 @@ mindspore.experimental.optim.Adadelta
         \end{aligned}
 
     .. warning::
-        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adagrad.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adagrad.rst
index 76afab03260..49a9782bebc 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adagrad.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adagrad.rst
@@ -29,7 +29,7 @@ mindspore.experimental.optim.Adagrad
         \end{aligned}
 
    .. warning::
-        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adam.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adam.rst
index 6d0e7a1bff5..64795c27cfc 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adam.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adam.rst
@@ -38,7 +38,7 @@ mindspore.experimental.optim.Adam
         \end{aligned}
 
    .. warning::
-        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.AdamW.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.AdamW.rst
index f758d825edb..5ac65ca0b15 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.AdamW.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.AdamW.rst
@@ -38,7 +38,7 @@ mindspore.experimental.optim.AdamW
         \end{aligned}
 
    .. warning::
-        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adamax.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adamax.rst
index c731da4f032..fcf134cb39b 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adamax.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Adamax.rst
@@ -30,7 +30,7 @@ mindspore.experimental.optim.Adamax
         \end{aligned}
 
    .. warning::
-        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.NAdam.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.NAdam.rst
index 0d0c9e7ebbd..caaef804814 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.NAdam.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.NAdam.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.NAdam
     NAdam算法的实现。
 
    .. warning::
-        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Optimizer.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Optimizer.rst
index 3a62ad16ad7..02768fee9bc 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Optimizer.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Optimizer.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.Optimizer
     用于参数更新的优化器基类。
 
    .. warning::
-        这是一个实验性的优化器模块,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器模块,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.RAdam.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.RAdam.rst
index 2a8957ee7f3..7fbc9027cc6 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.RAdam.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.RAdam.rst
@@ -40,7 +40,7 @@ mindspore.experimental.optim.RAdam
         \end{aligned}
 
    .. warning::
-        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.RMSprop.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.RMSprop.rst
index d2c86e81b2f..33400ebfdc8 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.RMSprop.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.RMSprop.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.RMSprop
     RMSprop 算法的实现。
 
    .. warning::
-        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Rprop.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Rprop.rst
index f11718dfa8b..8c50e4c1c97 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Rprop.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.Rprop.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.Rprop
     Rprop 算法的实现。
 
    .. warning::
-        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.SGD.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.SGD.rst
index 4550ea4cfcb..38e805d40f7 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.SGD.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.SGD.rst
@@ -21,7 +21,7 @@ mindspore.experimental.optim.SGD
     需要注意的是,对于训练的第一步 :math:`v_{t+1} = gradient`。其中,p、v和u分别表示 `parameters`、`accum` 和 `momentum`。
 
    .. warning::
-        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
+        这是一个实验性的优化器接口,需要和 `LRScheduler `_ 下的动态学习率接口配合使用。
 
     参数:
         - **params** (Union[list(Parameter), list(dict)]) - 网络参数的列表或指定了参数组的列表。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ConstantLR.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ConstantLR.rst
index 5398c01ab56..a4a502379f1 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ConstantLR.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ConstantLR.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.lr_scheduler.ConstantLR
     将每个参数组的学习率按照衰减因子 `factor` 进行衰减,直到 `last_epoch` 达到 `total_iters`。注意,这种衰减可能与外部对于学习率的改变同时发生。
 
    .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CosineAnnealingLR.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CosineAnnealingLR.rst
index 9bbbe496769..30a72e62e89 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CosineAnnealingLR.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CosineAnnealingLR.rst
@@ -18,7 +18,7 @@ mindspore.experimental.optim.lr_scheduler.CosineAnnealingLR
     详情请查看 `SGDR: Stochastic Gradient Descent with Warm Restarts `_。
 
    .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CosineAnnealingWarmRestarts.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CosineAnnealingWarmRestarts.rst
index 8ff6aff0a57..9f9ca10141d 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CosineAnnealingWarmRestarts.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CosineAnnealingWarmRestarts.rst
@@ -14,7 +14,7 @@ mindspore.experimental.optim.lr_scheduler.CosineAnnealingWarmRestarts
     详情请查看 `SGDR: Stochastic Gradient Descent with Warm Restarts `_。
 
   .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CyclicLR.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CyclicLR.rst
index 72fc4b21e95..3d2f22bf972 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CyclicLR.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.CyclicLR.rst
@@ -12,7 +12,7 @@ mindspore.experimental.optim.lr_scheduler.CyclicLR
     - "exp_range": 在每个迭代中按照 :math:`\text{gamma}^{\text{cycle iterations}}` 缩放初始幅度。
 
   .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ExponentialLR.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ExponentialLR.rst
index 2bc8718d546..4807bc7d322 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ExponentialLR.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ExponentialLR.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.lr_scheduler.ExponentialLR
     每个epoch呈指数衰减的学习率,即乘以 `gamma` 。注意,这种衰减可能与外部对于学习率的改变同时发生。
 
   .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LRScheduler.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LRScheduler.rst
index b80e822cdb4..23b6c4cd149 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LRScheduler.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LRScheduler.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.lr_scheduler.LRScheduler
     动态学习率的基类。
 
   .. warning::
-        这是一个实验性的动态学习率模块,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率模块,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LambdaLR.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LambdaLR.rst
index 2f680a8e1b2..079b4f5e6f2 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LambdaLR.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LambdaLR.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.lr_scheduler.LambdaLR
     将每个参数组的学习率设定为初始学习率乘以指定的 `lr_lambda` 函数。当 `last_epoch = -1` 时,将学习率设置成初始学习率。
 
   .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LinearLR.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LinearLR.rst
index 1da6a2a47f8..ab6051b286f 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LinearLR.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.LinearLR.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.lr_scheduler.LinearLR
     线性减小学习率乘法因子 ,并将每个参数组的学习率按照此乘法因子进行衰减,直到 `last_epoch` 数达到 `total_iters`。注意,这种衰减可能与外部对于学习率的改变同时发生。
 
  .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.MultiStepLR.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.MultiStepLR.rst
index ddae498387f..721b493bbe6 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.MultiStepLR.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.MultiStepLR.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.lr_scheduler.MultiStepLR
     当epoch/step达到 `milestones` 时,将每个参数组的学习率按照乘法因子 `gamma` 进行变化。注意,这种衰减可能与外部对于学习率的改变同时发生。
 
  .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.MultiplicativeLR.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.MultiplicativeLR.rst
index d3b9fb999b1..922fabaebb0 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.MultiplicativeLR.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.MultiplicativeLR.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.lr_scheduler.MultiplicativeLR
     将每个参数组当前的学习率按照传入的 `lr_lambda` 函数乘以指定的乘法因子。当 `last_epoch = -1` 时,将学习率设置成初始学习率。
 
  .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.PolynomialLR.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.PolynomialLR.rst
index dfe3a99913e..ecdac37d7fc 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.PolynomialLR.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.PolynomialLR.rst
@@ -15,7 +15,7 @@ mindspore.experimental.optim.lr_scheduler.PolynomialLR
         \end{split}
 
  .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ReduceLROnPlateau.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ReduceLROnPlateau.rst
index 865a851d53f..6b0c57c023e 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ReduceLROnPlateau.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.ReduceLROnPlateau.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.lr_scheduler.ReduceLROnPlateau
     当指标停止改进时降低学习率。训练中学习停滞情况下,模型通常会受益于将学习率降低2-10倍。该调度程序在执行过程中读取 `step` 方法中传入的指标 `metrics`,如果在 `patience` 的时期内没有得到改进,则学习率会降低。
 
  .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.SequentialLR.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.SequentialLR.rst
index 40da56e0b0f..859caeaaadc 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.SequentialLR.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.SequentialLR.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.lr_scheduler.SequentialLR
     `SequentialLR` 接收一个将被顺序调用的学习率调度器列表 `schedulers`,及指定的间隔列表 `milestone`,`milestone` 设定了每个epoch哪个调度器被调用。
 
  .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.StepLR.rst b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.StepLR.rst
index 6e85a588315..450090fd0dd 100644
--- a/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.StepLR.rst
+++ b/docs/api/api_python/experimental/optim/mindspore.experimental.optim.lr_scheduler.StepLR.rst
@@ -6,7 +6,7 @@ mindspore.experimental.optim.lr_scheduler.StepLR
     每 `step_size` 个epoch按 `gamma` 衰减每个参数组的学习率。`StepLR` 对于学习率的衰减可能与外部对于学习率的改变同时发生。
 
 .. warning::
-        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
+        这是一个实验性的动态学习率接口,需要和 `mindspore.experimental.optim `_ 下的接口配合使用。
 
     参数:
         - **optimizer** (:class:`mindspore.experimental.optim.Optimizer`) - 优化器实例。
diff --git a/docs/api/api_python/mindspore.communication.rst b/docs/api/api_python/mindspore.communication.rst
index c77ad7e4989..342b0351345 100644
--- a/docs/api/api_python/mindspore.communication.rst
+++ b/docs/api/api_python/mindspore.communication.rst
@@ -4,11 +4,11 @@ mindspore.communication
 
 注意,集合通信接口需要先配置好通信环境变量。
 
-针对Ascend设备,用户需要准备rank表,设置rank_id和device_id,详见 `rank table启动 `_ 。
+针对Ascend设备,用户需要准备rank表,设置rank_id和device_id,详见 `rank table启动 `_ 。
 
-针对GPU设备,用户需要准备host文件和mpi,详见 `mpirun启动 `_ 。
+针对GPU设备,用户需要准备host文件和mpi,详见 `mpirun启动 `_ 。
 
-针对CPU设备,用户需要编写动态组网启动脚本,详见 `动态组网启动 `_ 。
+针对CPU设备,用户需要编写动态组网启动脚本,详见 `动态组网启动 `_ 。
 
 .. py:class:: mindspore.communication.GlobalComm
diff --git a/docs/api/api_python/mindspore.dataset.rst b/docs/api/api_python/mindspore.dataset.rst
index 762aa8f4dd0..37152d8829a 100644
--- a/docs/api/api_python/mindspore.dataset.rst
+++ b/docs/api/api_python/mindspore.dataset.rst
@@ -8,7 +8,7 @@ mindspore.dataset
 大多数数据集可以通过指定参数 `cache` 启用缓存服务,以提升整体数据处理效率。
 请注意Windows平台上还不支持缓存服务,因此在Windows上加载和处理数据时,请勿使用。更多介绍和限制,
-请参考 `Single-Node Tensor Cache `_ 。
+请参考 `Single-Node Tensor Cache `_ 。
 
 在API示例中,常用的模块导入方法如下:
@@ -38,9 +38,9 @@ mindspore.dataset
 - 数据集操作(filter/ skip):用户通过数据集对象方法 `.shuffle` / `.filter` / `.skip` / `.split` / `.take` / … 来实现数据集的进一步混洗、过滤、跳过、最多获取条数等操作;
 - 数据集样本增强操作(map):用户可以将数据增强操作
-  (`vision类 `_ ,
-  `nlp类 `_ ,
-  `audio类 `_ )
+  (`vision类 `_ ,
+  `nlp类 `_ ,
+  `audio类 `_ )
   添加到map操作中执行,数据预处理过程中可以定义多个map操作,用于执行不同增强操作,数据增强操作也可以是
   用户自定义增强的 `PyFunc` ;
 - 批(batch):用户在样本完成增强后,使用 `.batch` 操作将多个样本组织成batch,也可以通过batch的参数 `per_batch_map`
@@ -51,7 +51,7 @@ mindspore.dataset
 数据处理Pipeline快速上手
 -------------------------
 
-如何快速使用Dataset Pipeline,可以将 `使用数据Pipeline加载 & 处理数据集 `_ 下载到本地,按照顺序执行并观察输出结果。
+如何快速使用Dataset Pipeline,可以将 `使用数据Pipeline加载 & 处理数据集 `_ 下载到本地,按照顺序执行并观察输出结果。
 
 视觉
 -----
diff --git a/docs/api/api_python/mindspore.dataset.transforms.rst b/docs/api/api_python/mindspore.dataset.transforms.rst
index 08d928b34da..043b911fa66 100644
--- a/docs/api/api_python/mindspore.dataset.transforms.rst
+++ b/docs/api/api_python/mindspore.dataset.transforms.rst
@@ -20,7 +20,7 @@ mindspore.dataset.transforms
     from mindspore.dataset.transforms import c_transforms
     from mindspore.dataset.transforms import py_transforms
 
-更多详情请参考 `通用数据变换 `_ 。
+更多详情请参考 `通用数据变换 `_ 。
 
 常用数据处理术语说明如下:
@@ -80,7 +80,7 @@ API样例中常用的导入模块如下:
     import mindspore.dataset.vision.py_transforms as py_vision
     from mindspore.dataset.transforms import c_transforms
 
-更多详情请参考 `视觉数据变换 `_ 。
+更多详情请参考 `视觉数据变换 `_ 。
 
 常用数据处理术语说明如下:
@@ -89,13 +89,13 @@ API样例中常用的导入模块如下:
 数据增强操作可以放入数据处理Pipeline中执行,也可以Eager模式执行:
 
-- Pipeline模式用于流式处理大型数据集,示例可参考 `数据处理Pipeline介绍 `_ 。
-- Eager模式用于函数调用方式处理样本,示例可参考 `轻量化数据处理 `_ 。
+- Pipeline模式用于流式处理大型数据集,示例可参考 `数据处理Pipeline介绍 `_ 。
+- Eager模式用于函数调用方式处理样本,示例可参考 `轻量化数据处理 `_ 。
 
 样例库
 ^^^^^^
 
-快速上手使用视觉类变换的API,跳转参考 `视觉变换样例库 `_ 。
+快速上手使用视觉类变换的API,跳转参考 `视觉变换样例库 `_ 。
 此指南中展示了多个变换API的用法,以及输入输出结果。
 
 变换
@@ -226,7 +226,7 @@ API样例中常用的导入模块如下:
     import mindspore.dataset as ds
     from mindspore.dataset import text
 
-更多详情请参考 `文本数据变换 `_ 。
+更多详情请参考 `文本数据变换 `_ 。
 
 常用数据处理术语说明如下:
@@ -235,13 +235,13 @@ API样例中常用的导入模块如下:
 数据增强操作可以放入数据处理Pipeline中执行,也可以Eager模式执行:
 
-- Pipeline模式用于流式处理大型数据集,示例可参考 `数据处理Pipeline介绍 `_ 。
-- Eager模式用于函数调用方式处理样本,示例可参考 `轻量化数据处理 `_ 。
+- Pipeline模式用于流式处理大型数据集,示例可参考 `数据处理Pipeline介绍 `_ 。
+- Eager模式用于函数调用方式处理样本,示例可参考 `轻量化数据处理 `_ 。
 
 样例库
 ^^^^^^
 
-快速上手使用文本变换的API,跳转参考 `文本变换样例库 `_ 。
+快速上手使用文本变换的API,跳转参考 `文本变换样例库 `_ 。
 此指南中展示了多个变换API的用法,以及输入输出结果。
 
 变换
@@ -305,13 +305,13 @@ API样例中常用的导入模块如下:
 数据增强操作可以放入数据处理Pipeline中执行,也可以Eager模式执行:
 
-- Pipeline模式用于流式处理大型数据集,示例可参考 `数据处理Pipeline介绍 `_ 。
-- Eager模式用于函数调用方式处理样本,示例可参考 `轻量化数据处理 `_ 。
+- Pipeline模式用于流式处理大型数据集,示例可参考 `数据处理Pipeline介绍 `_ 。
+- Eager模式用于函数调用方式处理样本,示例可参考 `轻量化数据处理 `_ 。
 
 样例库
 ^^^^^^
 
-快速上手使用音频变换的API,跳转参考 `音频变换样例库 `_ 。
+快速上手使用音频变换的API,跳转参考 `音频变换样例库 `_ 。
此指南中展示了多个变换API的用法,以及输入输出结果。 变换 diff --git a/docs/api/api_python/mindspore.experimental.rst b/docs/api/api_python/mindspore.experimental.rst index 89615bdd1d6..a227f85a210 100644 --- a/docs/api/api_python/mindspore.experimental.rst +++ b/docs/api/api_python/mindspore.experimental.rst @@ -35,7 +35,7 @@ LRScheduler类 from mindspore import nn from mindspore.experimental import optim # Define the network structure of LeNet5. Refer to - # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py net = LeNet5() loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) @@ -52,7 +52,7 @@ LRScheduler类 return loss for epoch in range(6): # Create the dataset taking MNIST as an example. Refer to - # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py for data, label in create_dataset(need_download=False): train_step(data, label) diff --git a/docs/api/api_python/mindspore.nn.rst b/docs/api/api_python/mindspore.nn.rst index c9e6f6f3d12..244dbc391aa 100644 --- a/docs/api/api_python/mindspore.nn.rst +++ b/docs/api/api_python/mindspore.nn.rst @@ -5,9 +5,9 @@ mindspore.nn 用于构建神经网络中的预定义构建块或计算单元。 -动态shape的支持情况详见 `nn接口动态shape支持情况 `_ 。 +动态shape的支持情况详见 `nn接口动态shape支持情况 `_ 。 -MindSpore中 `mindspore.nn` 接口与上一版本相比,新增、删除和支持平台的变化信息请参考 `mindspore.nn API接口变更 `_ 。 +MindSpore中 `mindspore.nn` 接口与上一版本相比,新增、删除和支持平台的变化信息请参考 `mindspore.nn API接口变更 `_ 。 基本构成单元 ------------ diff --git a/docs/api/api_python/mindspore.numpy.rst b/docs/api/api_python/mindspore.numpy.rst index 104822e98a4..9dbcc5de3aa 100644 --- a/docs/api/api_python/mindspore.numpy.rst +++ b/docs/api/api_python/mindspore.numpy.rst @@ -658,7 +658,7 @@ mindspore.numpy能够充分利用MindSpore的强大功能,实现算子的自 ... 
Tensor(shape=[4], dtype=Float32, value= [ 2.00000000e+00, 2.00000000e+00, 2.00000000e+00, 2.00000000e+00])) - 更多细节可参考 `API GradOperation `_ 。 + 更多细节可参考 `API GradOperation `_ 。 - mindspore.set_context使用示例 @@ -684,7 +684,7 @@ mindspore.numpy能够充分利用MindSpore的强大功能,实现算子的自 set_context(device_target="Ascend") ... - 更多细节可参考 `API mindspore.set_context `_ 。 + 更多细节可参考 `API mindspore.set_context `_ 。 - mindspore.numpy使用示例 diff --git a/docs/api/api_python/mindspore.ops.primitive.rst b/docs/api/api_python/mindspore.ops.primitive.rst index 444cafdff44..140fda85fcb 100644 --- a/docs/api/api_python/mindspore.ops.primitive.rst +++ b/docs/api/api_python/mindspore.ops.primitive.rst @@ -3,11 +3,11 @@ mindspore.ops.primitive 可用于Cell的构造函数的算子。 -动态shape的支持情况详见 `算子动态shape支持情况 `_ 。 +动态shape的支持情况详见 `算子动态shape支持情况 `_ 。 -bfloat16数据类型的支持情况详见 `支持列表 `_ 。 +bfloat16数据类型的支持情况详见 `支持列表 `_ 。 -算子级并行过程各算子的使用约束详见 `算子级并行使用约束 `_ 。 +算子级并行过程各算子的使用约束详见 `算子级并行使用约束 `_ 。 模块导入方法如下: @@ -15,7 +15,7 @@ bfloat16数据类型的支持情况详见 `支持列表 `_ 。 +MindSpore中 `mindspore.ops.primitive` 接口与上一版本相比,新增、删除和支持平台的变化信息请参考 `mindspore.ops.primitive API接口变更 `_ 。 算子原语 ---------- @@ -614,15 +614,15 @@ Parameter操作算子 通信算子 ---------------- -在分布式训练中进行数据传输涉及通信操作,详情请参考 `分布式集合通信原语 `_ 。 +在分布式训练中进行数据传输涉及通信操作,详情请参考 `分布式集合通信原语 `_ 。 注意,以下列表中的接口需要先配置好通信环境变量。 -针对Ascend设备,用户需要准备rank表,设置rank_id和device_id,详见 `rank table启动 `_ 。 +针对Ascend设备,用户需要准备rank表,设置rank_id和device_id,详见 `rank table启动 `_ 。 -针对GPU设备,用户需要准备host文件和mpi,详见 `mpirun启动 `_ 。 +针对GPU设备,用户需要准备host文件和mpi,详见 `mpirun启动 `_ 。 -针对CPU设备,用户需要编写动态组网启动脚本,详见 `动态组网启动 `_ 。 +针对CPU设备,用户需要编写动态组网启动脚本,详见 `动态组网启动 `_ 。 .. 
mscnplatwarnautosummary:: :toctree: ops diff --git a/docs/api/api_python/mindspore.ops.rst b/docs/api/api_python/mindspore.ops.rst index 218bddbdefa..1cfe335bb4a 100644 --- a/docs/api/api_python/mindspore.ops.rst +++ b/docs/api/api_python/mindspore.ops.rst @@ -1,9 +1,9 @@ mindspore.ops ============================= -动态shape的支持情况详见 `ops接口动态shape支持情况 `_ 。 +动态shape的支持情况详见 `ops接口动态shape支持情况 `_ 。 -MindSpore中 `mindspore.ops` 接口与上一版本相比,新增、删除和支持平台的变化信息请参考 `mindspore.ops API接口变更 `_ 。 +MindSpore中 `mindspore.ops` 接口与上一版本相比,新增、删除和支持平台的变化信息请参考 `mindspore.ops API接口变更 `_ 。 神经网络层函数 ---------------- diff --git a/docs/api/api_python/mindspore.rewrite.rst b/docs/api/api_python/mindspore.rewrite.rst index aeb2ce834b5..41ec954bcda 100644 --- a/docs/api/api_python/mindspore.rewrite.rst +++ b/docs/api/api_python/mindspore.rewrite.rst @@ -2,7 +2,7 @@ mindspore.rewrite ================= MindSpore的ReWrite模块为用户提供了基于自定义规则,对网络的前向计算过程进行修改的能力,如插入、删除和替换语句。 -如何快速使用ReWrite,请参考 `使用ReWrite修改网络 `_ 。 +如何快速使用ReWrite,请参考 `使用ReWrite修改网络 `_ 。 .. 
py:class:: mindspore.rewrite.Node(node: NodeImpl) diff --git a/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.init_data.rst b/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.init_data.rst index 5a057eb281d..27a819efd8b 100644 --- a/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.init_data.rst +++ b/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.init_data.rst @@ -10,7 +10,7 @@ mindspore.Tensor.init_data 参数: - **slice_index** (int) - 参数切片的索引。在初始化参数切片的时候使用,保证使用相同切片的设备可以生成相同的Tensor。默认值: ``None`` 。 - **shape** (list[int]) - 切片的shape,在初始化参数切片时使用。默认值: ``None`` 。 - - **opt_shard_group** (str) - 优化器分片组,在自动或半自动并行模式下用于获取参数的切片。关于优化器分组,请参考 `优化器并行 `_ 。默认值: ``None`` 。 + - **opt_shard_group** (str) - 优化器分片组,在自动或半自动并行模式下用于获取参数的切片。关于优化器分组,请参考 `优化器并行 `_ 。默认值: ``None`` 。 返回: 初始化的Tensor。 \ No newline at end of file diff --git a/docs/api/api_python/mindspore/mindspore.CSRTensor.rst b/docs/api/api_python/mindspore/mindspore.CSRTensor.rst index 1fc4a559014..c685a775ca5 100644 --- a/docs/api/api_python/mindspore/mindspore.CSRTensor.rst +++ b/docs/api/api_python/mindspore/mindspore.CSRTensor.rst @@ -13,7 +13,7 @@ mindspore.CSRTensor [0., 0., 2., 0.], [0., 0., 0., 0.]] - `CSRTensor` 的算术运算包括:加(+)、减(-)、乘(*)、除(/)。详细的算术运算支持请参考 `运算符 `_。 + `CSRTensor` 的算术运算包括:加(+)、减(-)、乘(*)、除(/)。详细的算术运算支持请参考 `运算符 `_。 .. 
warning:: - 这是一个实验性API,后续可能修改或删除。 diff --git a/docs/api/api_python/mindspore/mindspore.JitConfig.rst b/docs/api/api_python/mindspore/mindspore.JitConfig.rst index ec101d68d58..70231ae466e 100644 --- a/docs/api/api_python/mindspore/mindspore.JitConfig.rst +++ b/docs/api/api_python/mindspore/mindspore.JitConfig.rst @@ -20,7 +20,7 @@ mindspore.JitConfig - **jit_syntax_level** (str, 可选) - 设置JIT语法支持级别,其值必须为 ``"STRICT"``, ``"LAX"`` 或 ``""`` 。 默认是空字符串,表示忽略该项JitConfig配置,将使用ms.context的jit_syntax_level,ms.context请参考 - `set_context `_ 。 + `set_context `_ 。 默认值: ``""`` 。 - ``"STRICT"``: 仅支持基础语法,且执行性能最佳。可用于MindIR导入导出。 diff --git a/docs/api/api_python/mindspore/mindspore.Parameter.rst b/docs/api/api_python/mindspore/mindspore.Parameter.rst index db519a23bf9..263f8464b54 100644 --- a/docs/api/api_python/mindspore/mindspore.Parameter.rst +++ b/docs/api/api_python/mindspore/mindspore.Parameter.rst @@ -183,7 +183,7 @@ 教程样例: - `Parameter Server模式 - `_ + `_ .. py:method:: sliced :property: diff --git a/docs/api/api_python/mindspore/mindspore.ParameterTuple.rst b/docs/api/api_python/mindspore/mindspore.ParameterTuple.rst index 99596e838b9..6f1d8665f6c 100644 --- a/docs/api/api_python/mindspore/mindspore.ParameterTuple.rst +++ b/docs/api/api_python/mindspore/mindspore.ParameterTuple.rst @@ -25,4 +25,4 @@ mindspore.ParameterTuple 教程样例: - `Cell与参数 - Parameter Tuple - `_ \ No newline at end of file + `_ \ No newline at end of file diff --git a/docs/api/api_python/mindspore/mindspore.QuantDtype.rst b/docs/api/api_python/mindspore/mindspore.QuantDtype.rst index 6d39b80d2b6..370ef02b7fd 100644 --- a/docs/api/api_python/mindspore/mindspore.QuantDtype.rst +++ b/docs/api/api_python/mindspore/mindspore.QuantDtype.rst @@ -5,7 +5,7 @@ mindspore.QuantDtype MindSpore量化数据类型枚举类,包含 `INT1` ~ `INT16`,`UINT1` ~ `UINT16` 。 - `QuantDtype` 定义在 `dtype.py `_ 文件下 。运行以下命令导入环境: + `QuantDtype` 定义在 `dtype.py `_ 文件下 。运行以下命令导入环境: .. 
code-block:: diff --git a/docs/api/api_python/mindspore/mindspore.SummaryRecord.rst b/docs/api/api_python/mindspore/mindspore.SummaryRecord.rst index 5b4116e09ca..a85e50a74e7 100644 --- a/docs/api/api_python/mindspore/mindspore.SummaryRecord.rst +++ b/docs/api/api_python/mindspore/mindspore.SummaryRecord.rst @@ -54,13 +54,13 @@ mindspore.SummaryRecord - **name** (str) - 数据名称。 - **value** (Union[Tensor, GraphProto, TrainLineage, EvaluationLineage, DatasetGraph, UserDefinedInfo,LossLandscape]) - 待存储的值。 - - 当plugin为"graph"时,参数值的数据类型应为"GraphProto"对象。具体详情,请参见 `mindspore/ccsrc/anf_ir.proto `_ 。 + - 当plugin为"graph"时,参数值的数据类型应为"GraphProto"对象。具体详情,请参见 `mindspore/ccsrc/anf_ir.proto `_ 。 - 当plugin为"scalar"、"image"、"tensor"或"histogram"时,参数值的数据类型应为"Tensor"对象。 - - 当plugin为"train_lineage"时,参数值的数据类型应为"TrainLineage"对象。具体详情,请参见 `mindspore/ccsrc/lineage.proto `_ 。 - - 当plugin为"eval_lineage"时,参数值的数据类型应为"EvaluationLineage"对象。具体详情,请参见 `mindspore/ccsrc/lineage.proto `_ 。 - - 当plugin为"dataset_graph"时,参数值的数据类型应为"DatasetGraph"对象。具体详情,请参见 `mindspore/ccsrc/lineage.proto `_ 。 - - 当plugin为"custom_lineage_data"时,参数值的数据类型应为"UserDefinedInfo"对象。具体详情,请参见 `mindspore/ccsrc/lineage.proto `_ 。 - - 当plugin为"LANDSCAPE"时,参数值的数据类型应为"LossLandscape"对象。具体详情,请参见 `mindspore/ccsrc/summary.proto `_ 。 + - 当plugin为"train_lineage"时,参数值的数据类型应为"TrainLineage"对象。具体详情,请参见 `mindspore/ccsrc/lineage.proto `_ 。 + - 当plugin为"eval_lineage"时,参数值的数据类型应为"EvaluationLineage"对象。具体详情,请参见 `mindspore/ccsrc/lineage.proto `_ 。 + - 当plugin为"dataset_graph"时,参数值的数据类型应为"DatasetGraph"对象。具体详情,请参见 `mindspore/ccsrc/lineage.proto `_ 。 + - 当plugin为"custom_lineage_data"时,参数值的数据类型应为"UserDefinedInfo"对象。具体详情,请参见 `mindspore/ccsrc/lineage.proto `_ 。 + - 当plugin为"LANDSCAPE"时,参数值的数据类型应为"LossLandscape"对象。具体详情,请参见 `mindspore/ccsrc/summary.proto `_ 。 异常: - **ValueError** - `plugin` 的值不在可选值内。 @@ -97,7 +97,7 @@ mindspore.SummaryRecord bool,表示记录是否成功。 异常: - - **TypeError** - `step` 不为整型,或 `train_network` 的类型不为 `mindspore.nn.Cell `_ 。 + - **TypeError** - `step` 
不为整型,或 `train_network` 的类型不为 `mindspore.nn.Cell `_ 。 .. py:method:: set_mode(mode) diff --git a/docs/api/api_python/mindspore/mindspore.dtype.rst b/docs/api/api_python/mindspore/mindspore.dtype.rst index 9b4771510e6..298b5a8eda2 100644 --- a/docs/api/api_python/mindspore/mindspore.dtype.rst +++ b/docs/api/api_python/mindspore/mindspore.dtype.rst @@ -41,7 +41,7 @@ mindspore.dtype ============================ ================= 类型 描述 ============================ ================= - ``Tensor`` MindSpore中的张量类型。数据格式采用NCHW。详情请参考 `tensor `_ 。 + ``Tensor`` MindSpore中的张量类型。数据格式采用NCHW。详情请参考 `tensor `_ 。 ``bool_`` 布尔型,值为 ``True`` 或者 ``False`` 。 ``int_`` 整数标量。 ``uint`` 无符号整数标量。 diff --git a/docs/api/api_python/mindspore/mindspore.export.rst b/docs/api/api_python/mindspore/mindspore.export.rst index 8a96d42e670..edca967cc4b 100644 --- a/docs/api/api_python/mindspore/mindspore.export.rst +++ b/docs/api/api_python/mindspore/mindspore.export.rst @@ -45,4 +45,4 @@ mindspore.export 教程样例: - `保存与加载 - 保存和加载MindIR - `_ \ No newline at end of file + `_ \ No newline at end of file diff --git a/docs/api/api_python/mindspore/mindspore.func_tensor.rst b/docs/api/api_python/mindspore/mindspore.func_tensor.rst index d256bc171f7..82b2dea8576 100644 --- a/docs/api/api_python/mindspore/mindspore.func_tensor.rst +++ b/docs/api/api_python/mindspore/mindspore.func_tensor.rst @@ -7,9 +7,9 @@ mindspore.tensor 在图模式下,MindSpore可以在运行时依据 `dtype` 参数来动态创建新Tensor。 - 详情请参考教程 `创建和使用Tensor `_ 。 + 详情请参考教程 `创建和使用Tensor `_ 。 - 有别于Tensor类,其与Tensor类的区别为内部增加了 `Annotation `_ 指示当前创建的Tensor的类型,与Tensor类相比能够防止AnyType的产生。 + 有别于Tensor类,其与Tensor类的区别为内部增加了 `Annotation `_ 指示当前创建的Tensor的类型,与Tensor类相比能够防止AnyType的产生。 参数和返回值与Tensor类完全一致。另参考::class:`mindspore.Tensor`。 diff --git a/docs/api/api_python/mindspore/mindspore.jacfwd.rst b/docs/api/api_python/mindspore/mindspore.jacfwd.rst index 70bde5163ed..79609e8cd70 100644 --- a/docs/api/api_python/mindspore/mindspore.jacfwd.rst +++ b/docs/api/api_python/mindspore/mindspore.jacfwd.rst 
@@ -3,7 +3,7 @@ mindspore.jacfwd .. py:function:: mindspore.jacfwd(fn, grad_position=0, has_aux=False) - 通过前向模式计算给定网络的Jacobian矩阵,对应 `前向模式自动微分 `_。当网络输出数量远大于输入数量时,使用前向模式求Jacobian矩阵比反向模式性能更好。 + 通过前向模式计算给定网络的Jacobian矩阵,对应 `前向模式自动微分 `_。当网络输出数量远大于输入数量时,使用前向模式求Jacobian矩阵比反向模式性能更好。 参数: - **fn** (Union[Cell, Function]) - 待求导的函数或网络。以Tensor为入参,返回Tensor或Tensor数组。 diff --git a/docs/api/api_python/mindspore/mindspore.jacrev.rst b/docs/api/api_python/mindspore/mindspore.jacrev.rst index db57c9ac988..3321770cdb6 100644 --- a/docs/api/api_python/mindspore/mindspore.jacrev.rst +++ b/docs/api/api_python/mindspore/mindspore.jacrev.rst @@ -3,7 +3,7 @@ mindspore.jacrev .. py:function:: mindspore.jacrev(fn, grad_position=0, has_aux=False) - 通过反向模式计算给定网络的Jacobian矩阵,对应 `反向模式自动微分 `_。当网络输出数量远小于输入数量时,使用反向模式求Jacobian矩阵比前向模式性能更好。 + 通过反向模式计算给定网络的Jacobian矩阵,对应 `反向模式自动微分 `_。当网络输出数量远小于输入数量时,使用反向模式求Jacobian矩阵比前向模式性能更好。 参数: - **fn** (Union[Cell, Function]) - 待求导的函数或网络。以Tensor为入参,返回Tensor或Tensor数组。 diff --git a/docs/api/api_python/mindspore/mindspore.jit.rst b/docs/api/api_python/mindspore/mindspore.jit.rst index 530d528aa49..e4146c28b2b 100644 --- a/docs/api/api_python/mindspore/mindspore.jit.rst +++ b/docs/api/api_python/mindspore/mindspore.jit.rst @@ -11,8 +11,8 @@ mindspore.jit - **fn** (Function) - 要编译成图的Python函数。默认值: ``None`` 。 - **mode** (str) - 使用jit的类型,可选值有 ``"PSJit"`` 和 ``"PIJit"`` 。默认值: ``"PSJit"``。 - - `PSJit `_ :MindSpore 静态图模式。 - - `PIJit `_ :MindSpore 动态图模式。 + - `PSJit `_ :MindSpore 静态图模式。 + - `PIJit `_ :MindSpore 动态图模式。 - **input_signature** (Tensor) - 用于表示输入参数的Tensor。Tensor的shape和dtype将作为函数的输入shape和dtype。默认值: ``None`` 。 - **hash_args** (Union[Object, List or Tuple of Objects]) - `fn` 里面用到的自由变量,比如外部函数或类对象,再次调用时若 `hash_args` 出现变化会触发重新编译。默认值: ``None`` 。 diff --git a/docs/api/api_python/mindspore/mindspore.jvp.rst b/docs/api/api_python/mindspore/mindspore.jvp.rst index e019d382fc3..7e834378297 100644 --- a/docs/api/api_python/mindspore/mindspore.jvp.rst +++ 
b/docs/api/api_python/mindspore/mindspore.jvp.rst @@ -3,7 +3,7 @@ mindspore.jvp .. py:function:: mindspore.jvp(fn, inputs, v, has_aux=False) - 计算给定网络的雅可比向量积(Jacobian-vector product, JVP)。JVP对应 `前向模式自动微分 `_。 + 计算给定网络的雅可比向量积(Jacobian-vector product, JVP)。JVP对应 `前向模式自动微分 `_。 参数: - **fn** (Union[Function, Cell]) - 待求导的函数或网络。以Tensor为入参,返回Tensor或Tensor数组。 diff --git a/docs/api/api_python/mindspore/mindspore.load.rst b/docs/api/api_python/mindspore/mindspore.load.rst index 2a95328f628..d5586fd5dd5 100644 --- a/docs/api/api_python/mindspore/mindspore.load.rst +++ b/docs/api/api_python/mindspore/mindspore.load.rst @@ -18,7 +18,7 @@ mindspore.load - 关于使用自定义解密加载的详情,请查看 `教程 `_。 - - **obf_func** (function) - 导入混淆模型所需要的函数,可以参考 `obfuscate_model() `_ 了解详情。 + - **obf_func** (function) - 导入混淆模型所需要的函数,可以参考 `obfuscate_model() `_ 了解详情。 返回: GraphCell,一个可以由 `GraphCell` 构成的可执行的编译图。 @@ -29,4 +29,4 @@ mindspore.load 教程样例: - `保存与加载 - 保存和加载MindIR - `_ \ No newline at end of file + `_ \ No newline at end of file diff --git a/docs/api/api_python/mindspore/mindspore.load_checkpoint.rst b/docs/api/api_python/mindspore/mindspore.load_checkpoint.rst index cf2221d67ca..cdb1c4d3eb5 100644 --- a/docs/api/api_python/mindspore/mindspore.load_checkpoint.rst +++ b/docs/api/api_python/mindspore/mindspore.load_checkpoint.rst @@ -30,4 +30,4 @@ mindspore.load_checkpoint - **TypeError** - `specify_prefix` 或者 `filter_prefix` 的数据类型不正确。 教程样例: - - `保存与加载 - 保存和加载模型权重 `_ + - `保存与加载 - 保存和加载模型权重 `_ diff --git a/docs/api/api_python/mindspore/mindspore.load_distributed_checkpoint.rst b/docs/api/api_python/mindspore/mindspore.load_distributed_checkpoint.rst index e5bdf0bdc7c..6906c7ba7d3 100644 --- a/docs/api/api_python/mindspore/mindspore.load_distributed_checkpoint.rst +++ b/docs/api/api_python/mindspore/mindspore.load_distributed_checkpoint.rst @@ -3,7 +3,7 @@ mindspore.load_distributed_checkpoint .. 
py:function:: mindspore.load_distributed_checkpoint(network, checkpoint_filenames, predict_strategy=None, train_strategy_filename=None, strict_load=False, dec_key=None, dec_mode='AES-GCM') - 给分布式预测加载checkpoint文件到网络。用于分布式推理。关于分布式推理的细节,请参考: `分布式模型加载 `_ 。 + 给分布式预测加载checkpoint文件到网络。用于分布式推理。关于分布式推理的细节,请参考: `分布式模型加载 `_ 。 参数: - **network** (Cell) - 分布式预测网络。 diff --git a/docs/api/api_python/mindspore/mindspore.load_param_into_net.rst b/docs/api/api_python/mindspore/mindspore.load_param_into_net.rst index ae8b544759f..772e7f22641 100644 --- a/docs/api/api_python/mindspore/mindspore.load_param_into_net.rst +++ b/docs/api/api_python/mindspore/mindspore.load_param_into_net.rst @@ -18,4 +18,4 @@ mindspore.load_param_into_net - **TypeError** - 如果参数不是Cell,或者 `parameter_dict` 不是Parameter类型的字典。 教程样例: - - `保存与加载 - 保存和加载模型权重 `_ \ No newline at end of file + - `保存与加载 - 保存和加载模型权重 `_ \ No newline at end of file diff --git a/docs/api/api_python/mindspore/mindspore.merge_pipeline_strategys.rst b/docs/api/api_python/mindspore/mindspore.merge_pipeline_strategys.rst index fc5d1f7f322..b273d5cdccb 100644 --- a/docs/api/api_python/mindspore/mindspore.merge_pipeline_strategys.rst +++ b/docs/api/api_python/mindspore/mindspore.merge_pipeline_strategys.rst @@ -3,7 +3,7 @@ mindspore.merge_pipeline_strategys .. py:function:: mindspore.merge_pipeline_strategys(src_strategy_dirs, dst_strategy_file) - 流水线并行模式下,汇聚所有流水线并行子图的切分策略文件。关于更多分布式Checkpoint转换的细节,请参考:`模型转换 `_。 + 流水线并行模式下,汇聚所有流水线并行子图的切分策略文件。关于更多分布式Checkpoint转换的细节,请参考:`模型转换 `_。 .. note:: src_strategy_dirs必须包含所有流水线并行的子图的切分策略文件。 diff --git a/docs/api/api_python/mindspore/mindspore.rank_list_for_transform.rst b/docs/api/api_python/mindspore/mindspore.rank_list_for_transform.rst index 6d855ed2705..8d0aaa237b4 100644 --- a/docs/api/api_python/mindspore/mindspore.rank_list_for_transform.rst +++ b/docs/api/api_python/mindspore/mindspore.rank_list_for_transform.rst @@ -3,7 +3,7 @@ mindspore.rank_list_for_transform .. 
py:function:: mindspore.rank_list_for_transform(rank_id, src_strategy_file=None, dst_strategy_file=None) - 在对分布式Checkpoint转换的过程中,获取为了得到目标rank的Checkpoint文件所需的源Checkpoint文件rank列表。关于更多分布式Checkpoint转换的细节,请参考:`模型转换 `_。 + 在对分布式Checkpoint转换的过程中,获取为了得到目标rank的Checkpoint文件所需的源Checkpoint文件rank列表。关于更多分布式Checkpoint转换的细节,请参考:`模型转换 `_。 参数: - **rank_id** (int) - 待转换得到的Checkpoint的rank号。 diff --git a/docs/api/api_python/mindspore/mindspore.save_checkpoint.rst b/docs/api/api_python/mindspore/mindspore.save_checkpoint.rst index 0a27f1c9b76..311c3cfc06c 100644 --- a/docs/api/api_python/mindspore/mindspore.save_checkpoint.rst +++ b/docs/api/api_python/mindspore/mindspore.save_checkpoint.rst @@ -22,4 +22,4 @@ mindspore.save_checkpoint - **TypeError** - 如果参数 `ckpt_file_name` 不是字符串类型。 教程样例: - - `保存与加载 - 保存和加载模型权重 `_ \ No newline at end of file + - `保存与加载 - 保存和加载模型权重 `_ \ No newline at end of file diff --git a/docs/api/api_python/mindspore/mindspore.set_algo_parameters.rst b/docs/api/api_python/mindspore/mindspore.set_algo_parameters.rst index b9343645ba1..14595bfce78 100644 --- a/docs/api/api_python/mindspore/mindspore.set_algo_parameters.rst +++ b/docs/api/api_python/mindspore/mindspore.set_algo_parameters.rst @@ -3,7 +3,7 @@ mindspore.set_algo_parameters .. py:function:: mindspore.set_algo_parameters(**kwargs) - 设置并行策略搜索算法中的参数。有关典型用法,请参见 `test_auto_parallel_resnet.py `_ 。 + 设置并行策略搜索算法中的参数。有关典型用法,请参见 `test_auto_parallel_resnet.py `_ 。 .. 
note:: 属性名称为必填项。此接口仅在AUTO_PARALLEL模式下工作。 diff --git a/docs/api/api_python/mindspore/mindspore.set_context.rst b/docs/api/api_python/mindspore/mindspore.set_context.rst index f1cb8cf8752..f8e26cfb77a 100644 --- a/docs/api/api_python/mindspore/mindspore.set_context.rst +++ b/docs/api/api_python/mindspore/mindspore.set_context.rst @@ -127,7 +127,7 @@ mindspore.set_context - **mem_Reuse**:表示内存复用功能是否打开。设置为 ``True`` 时,将打开内存复用功能。设置为 ``False`` 时,将关闭内存复用功能。 - 配置详细信息,请查看 `Running Data Recorder `_ 和 `内存复用 `_ 。 + 配置详细信息,请查看 `Running Data Recorder `_ 和 `内存复用 `_ 。 - **precompile_only** (bool) - 表示是否仅预编译网络。默认值: ``False`` 。设置为 ``True`` 时,仅编译网络,而不执行网络。 - **reserve_class_name_in_scope** (bool) - 表示是否将网络类名称保存到所属ScopeName中。默认值: ``True`` 。每个节点都有一个ScopeName。子节点的ScopeName是其父节点。如果 `reserve_class_name_in_scope` 设置为 ``True`` ,则类名将保存在ScopeName中的关键字"net-"之后。例如: @@ -138,7 +138,7 @@ mindspore.set_context - **pynative_synchronize** (bool) - 表示是否在PyNative模式下启动设备同步执行。默认值: ``False`` 。设置为 ``False`` 时,将在设备上异步执行算子。当算子执行出错时,将无法定位特定错误脚本代码的位置。当设置为 ``True`` 时,将在设备上同步执行算子。这将降低程序的执行性能。此时,当算子执行出错时,可以根据错误的调用栈来定位错误脚本代码的位置。 - **mode** (int) - 表示在GRAPH_MODE(0)或PYNATIVE_MODE(1)模式中运行,两种模式都支持所有后端。默认值: ``PYNATIVE_MODE`` 。 - - **enable_graph_kernel** (bool) - 表示开启图算融合去优化网络执行性能。默认值: ``False`` 。如果 `enable_graph_kernel` 设置为 ``True`` ,则可以启用加速。有关图算融合的详细信息,请查看 `使能图算融合 `_ 。 + - **enable_graph_kernel** (bool) - 表示开启图算融合去优化网络执行性能。默认值: ``False`` 。如果 `enable_graph_kernel` 设置为 ``True`` ,则可以启用加速。有关图算融合的详细信息,请查看 `使能图算融合 `_ 。 - **graph_kernel_flags** (str) - 图算融合的优化选项,当与enable_graph_kernel冲突时,它的优先级更高。其仅适用于有经验的用户。例如: .. 
code-block:: @@ -215,14 +215,14 @@ mindspore.set_context - global (dict): 设置global类的选项。 - session (dict): 设置session类的选项。 - - **parallel_speed_up_json_path** (Union[str, None]): 并行加速配置文件,配置项可以参考 `parallel_speed_up.json `_ 。 + - **parallel_speed_up_json_path** (Union[str, None]): 并行加速配置文件,配置项可以参考 `parallel_speed_up.json `_ 。 当设置为None时,表示不启用。 - **recompute_comm_overlap** (bool): 为 ``True`` 时表示开启反向重计算和通信掩盖。默认值: ``False`` 。 - **matmul_grad_comm_overlap** (bool): 为 ``True`` 时表示开启反向Matmul和通信掩盖。默认值: ``False`` 。 - **enable_task_opt** (bool): 为 ``True`` 时表示开启通信算子task数量优化。默认值: ``False`` 。 - - **enable_grad_comm_opt** (bool): 为 ``True`` 时表示开启梯度dx计算与数据并行梯度通信的掩盖,暂时不支持 `LazyInline `_ 功能下开启。默认值: ``False`` 。 - - **enable_opt_shard_comm_opt** (bool): 为 ``True`` 时表示开启正向计算与优化器并行的AllGather通信的掩盖,暂时不支持 `LazyInline `_ 功能下开启。默认值: ``False`` 。 + - **enable_grad_comm_opt** (bool): 为 ``True`` 时表示开启梯度dx计算与数据并行梯度通信的掩盖,暂时不支持 `LazyInline `_ 功能下开启。默认值: ``False`` 。 + - **enable_opt_shard_comm_opt** (bool): 为 ``True`` 时表示开启正向计算与优化器并行的AllGather通信的掩盖,暂时不支持 `LazyInline `_ 功能下开启。默认值: ``False`` 。 - **enable_concat_eliminate_opt** (bool): 为 ``True`` 时表示开启Concat消除优化,当前在开启细粒度双副本优化时有收益。默认值: ``False`` 。 - **enable_begin_end_inline_opt** (bool): 为 ``True`` 时表示开启首尾micro_batch子图的内联,用于半自动并行子图模式,流水线并行场景,一般需要和其它通信计算掩盖优化一起使用。默认值: ``False`` 。 - **compute_communicate_fusion_level** (int): 控制通算融合的级别。默认值:``0``。 diff --git a/docs/api/api_python/mindspore/mindspore.set_dump.rst b/docs/api/api_python/mindspore/mindspore.set_dump.rst index a11b8977eb5..46332d9af57 100644 --- a/docs/api/api_python/mindspore/mindspore.set_dump.rst +++ b/docs/api/api_python/mindspore/mindspore.set_dump.rst @@ -5,7 +5,7 @@ mindspore.set_dump 启用或者禁用 `target` 及其子节点的Dump数据功能。 - `target` 为 :class:`mindspore.nn.Cell` 或 :class:`mindspore.ops.Primitive` 的实例。请注意,此API仅在开启异步Dump功能且Dump配置文件中的 `dump_mode` 字段为 ``"2"`` 时生效。有关详细信息,请参阅 `Dump功能文档 `_ 。默认状态下, :class:`mindspore.nn.Cell` 和 :class:`mindspore.ops.Primitive` 实例不使能Dump数据功能。 + `target` 为 
:class:`mindspore.nn.Cell` 或 :class:`mindspore.ops.Primitive` 的实例。请注意,此API仅在开启异步Dump功能且Dump配置文件中的 `dump_mode` 字段为 ``"2"`` 时生效。有关详细信息,请参阅 `Dump功能文档 `_ 。默认状态下, :class:`mindspore.nn.Cell` 和 :class:`mindspore.ops.Primitive` 实例不使能Dump数据功能。 .. warning:: 这是一个实验性API,后续可能修改或删除。在2.3版本暂不支持。 @@ -24,4 +24,4 @@ mindspore.set_dump .. note:: 运行此样例之前请设置环境变量 `MINDSPORE_DUMP_CONFIG` 到配置文件,并将配置文件中的 `dump_mode` 字段设置为2。 - 详细信息请参阅 `Dump功能文档 `_ 。 \ No newline at end of file + 详细信息请参阅 `Dump功能文档 `_ 。 \ No newline at end of file diff --git a/docs/api/api_python/mindspore/mindspore.shard.rst b/docs/api/api_python/mindspore/mindspore.shard.rst index 2b4e6ec9f19..dcdd957b2cd 100644 --- a/docs/api/api_python/mindspore/mindspore.shard.rst +++ b/docs/api/api_python/mindspore/mindspore.shard.rst @@ -38,4 +38,4 @@ mindspore.shard - **TypeError** - 如果 `level` 不是int。 教程样例: - - `函数式算子切分 `_ \ No newline at end of file + - `函数式算子切分 `_ \ No newline at end of file diff --git a/docs/api/api_python/mindspore/mindspore.transform_checkpoint_by_rank.rst b/docs/api/api_python/mindspore/mindspore.transform_checkpoint_by_rank.rst index 2eb55df6a80..eec29d556d3 100644 --- a/docs/api/api_python/mindspore/mindspore.transform_checkpoint_by_rank.rst +++ b/docs/api/api_python/mindspore/mindspore.transform_checkpoint_by_rank.rst @@ -3,7 +3,7 @@ mindspore.transform_checkpoint_by_rank .. 
py:function:: mindspore.transform_checkpoint_by_rank(rank_id, checkpoint_files_map, save_checkpoint_file_name, src_strategy_file=None, dst_strategy_file=None) - 将一个分布式网络的Checkpoint由源切分策略转换到目标切分策略,对特定一个rank进行转换。关于更多分布式Checkpoint转换的细节,请参考:`模型转换 `_。 + 将一个分布式网络的Checkpoint由源切分策略转换到目标切分策略,对特定一个rank进行转换。关于更多分布式Checkpoint转换的细节,请参考:`模型转换 `_。 参数: - **rank_id** (int) - 待转换得到的Checkpoint的rank号。 diff --git a/docs/api/api_python/mindspore/mindspore.transform_checkpoints.rst b/docs/api/api_python/mindspore/mindspore.transform_checkpoints.rst index 118e3975660..133a009bdc4 100644 --- a/docs/api/api_python/mindspore/mindspore.transform_checkpoints.rst +++ b/docs/api/api_python/mindspore/mindspore.transform_checkpoints.rst @@ -3,7 +3,7 @@ mindspore.transform_checkpoints .. py:function:: mindspore.transform_checkpoints(src_checkpoints_dir, dst_checkpoints_dir, ckpt_prefix, src_strategy_file=None, dst_strategy_file=None) - 将一个分布式网络的Checkpoint由源切分策略转换到目标切分策略。关于更多分布式Checkpoint转换的细节,请参考:`模型转换 `_。 + 将一个分布式网络的Checkpoint由源切分策略转换到目标切分策略。关于更多分布式Checkpoint转换的细节,请参考:`模型转换 `_。 .. note:: `src_checkpoints_dir` 目录必须组织为“src_checkpoints_dir/rank_0/a.ckpt”这样的目录结构,rank号必须作为子目录并且该rank的Checkpoint必须放置于该子目录内。如果多个文件存在于一个rank目录下,将会选名字的字典序最高的文件。 diff --git a/docs/api/api_python/mindspore/mindspore.vjp.rst b/docs/api/api_python/mindspore/mindspore.vjp.rst index 6219d540632..5100c77cacb 100644 --- a/docs/api/api_python/mindspore/mindspore.vjp.rst +++ b/docs/api/api_python/mindspore/mindspore.vjp.rst @@ -3,7 +3,7 @@ mindspore.vjp .. 
py:function:: mindspore.vjp(fn, inputs, weights=None, has_aux=False) - 计算给定网络的向量雅可比积(vector-jacobian-product, VJP)。VJP对应 `反向模式自动微分 `_。 + 计算给定网络的向量雅可比积(vector-jacobian-product, VJP)。VJP对应 `反向模式自动微分 `_。 参数: - **fn** (Union[Function, Cell]) - 待求导的函数或网络。以Tensor为入参,返回Tensor或Tensor数组。 diff --git a/docs/api/api_python/mindspore/mindspore.vmap.rst b/docs/api/api_python/mindspore/mindspore.vmap.rst index ff8eb709617..ca90a66bc57 100644 --- a/docs/api/api_python/mindspore/mindspore.vmap.rst +++ b/docs/api/api_python/mindspore/mindspore.vmap.rst @@ -5,7 +5,7 @@ mindspore.vmap 自动向量化(Vectorizing Map,vmap),是一种用于沿参数轴映射函数 `fn` 的高阶函数。 - Vmap由Jax率先提出,它消除了算子对batch维度的限制,并提供更加方便、统一的运算符表达。同时,用户还可以与 :func:`mindspore.grad` 等其它功能模块组合使用,提高开发效率,详情请参见教程 `自动向量化Vmap `_ 。 + Vmap由Jax率先提出,它消除了算子对batch维度的限制,并提供更加方便、统一的运算符表达。同时,用户还可以与 :func:`mindspore.grad` 等其它功能模块组合使用,提高开发效率,详情请参见教程 `自动向量化Vmap `_ 。 此外,由于自动向量化并不在函数外部执行循环,而是将循环逻辑下沉至函数的各个原语操作中,以获得更好的性能。当与图算融合特性相结合时,执行效率将进一步提高。 diff --git a/docs/api/api_python/nn/mindspore.nn.BatchNorm1d.rst b/docs/api/api_python/nn/mindspore.nn.BatchNorm1d.rst index 9981932162f..5148b517543 100644 --- a/docs/api/api_python/nn/mindspore.nn.BatchNorm1d.rst +++ b/docs/api/api_python/nn/mindspore.nn.BatchNorm1d.rst @@ -18,10 +18,10 @@ mindspore.nn.BatchNorm1d - **eps** (float) - :math:`\epsilon` 加在分母上的值,以确保数值稳定。默认值: ``1e-5`` 。 - **momentum** (float) - 动态均值和动态方差所使用的动量。默认值: ``0.9`` 。 - **affine** (bool) - bool类型。设置为 ``True`` 时,可学习到 :math:`\gamma` 和 :math:`\beta` 值。默认值: ``True`` 。 - - **gamma_init** (Union[Tensor, str, Initializer, numbers.Number]) - :math:`\gamma` 参数的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 - - **beta_init** (Union[Tensor, str, Initializer, numbers.Number]) - :math:`\beta` 参数的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'zeros'`` 。 - - **moving_mean_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态平均值的初始化方法。str的值引用自函数 
`mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'zeros'`` 。 - - **moving_var_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态方差的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 + - **gamma_init** (Union[Tensor, str, Initializer, numbers.Number]) - :math:`\gamma` 参数的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 + - **beta_init** (Union[Tensor, str, Initializer, numbers.Number]) - :math:`\beta` 参数的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'zeros'`` 。 + - **moving_mean_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态平均值的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'zeros'`` 。 + - **moving_var_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态方差的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 - **use_batch_statistics** (bool) - 如果为 ``True`` ,则使用当前批次数据的平均值和方差值。如果为 ``False`` ,则使用指定的平均值和方差值。如果为 ``None`` ,训练时,将使用当前批次数据的均值和方差,并更新动态均值和方差,验证过程将直接使用动态均值和方差。默认值: ``None`` 。 - **data_format** (str) - 数据格式可为 ``'NHWC'`` 或 ``'NCHW'`` 。默认值: ``'NCHW'`` 。 - **dtype** (:class:`mindspore.dtype`) - Parameters的dtype。默认值: ``mstype.float32`` 。 diff --git a/docs/api/api_python/nn/mindspore.nn.BatchNorm2d.rst b/docs/api/api_python/nn/mindspore.nn.BatchNorm2d.rst index 0d8311716a0..789cd8b4ee2 100644 --- a/docs/api/api_python/nn/mindspore.nn.BatchNorm2d.rst +++ b/docs/api/api_python/nn/mindspore.nn.BatchNorm2d.rst @@ -24,10 +24,10 @@ mindspore.nn.BatchNorm2d - **eps** (float) - :math:`\epsilon` 加在分母上的值,以确保数值稳定。默认值: ``1e-5`` 。 - **momentum** (float) - 动态均值和动态方差所使用的动量。默认值: ``0.9`` 。 - **affine** (bool) - bool类型。设置为 ``True`` 时,可学习 :math:`\gamma` 和 :math:`\beta` 值。默认值: ``True`` 。 - - **gamma_init** (Union[Tensor, str, Initializer, numbers.Number]) - :math:`\gamma` 参数的初始化方法。str的值引用自函数 
`mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 - - **beta_init** (Union[Tensor, str, Initializer, numbers.Number]) - :math:`\beta` 参数的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'zeros'`` 。 - - **moving_mean_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态平均值的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'zeros'`` 。 - - **moving_var_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态方差的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 + - **gamma_init** (Union[Tensor, str, Initializer, numbers.Number]) - :math:`\gamma` 参数的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 + - **beta_init** (Union[Tensor, str, Initializer, numbers.Number]) - :math:`\beta` 参数的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'zeros'`` 。 + - **moving_mean_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态平均值的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'zeros'`` 。 + - **moving_var_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态方差的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 - **use_batch_statistics** (bool) - 默认值: ``None`` 。 - 如果为 ``True`` ,则使用当前批处理数据的平均值和方差值,并跟踪运行平均值和运行方差。 diff --git a/docs/api/api_python/nn/mindspore.nn.BatchNorm3d.rst b/docs/api/api_python/nn/mindspore.nn.BatchNorm3d.rst index 7ea98ab038b..2ab12d7f224 100644 --- a/docs/api/api_python/nn/mindspore.nn.BatchNorm3d.rst +++ b/docs/api/api_python/nn/mindspore.nn.BatchNorm3d.rst @@ -20,10 +20,10 @@ mindspore.nn.BatchNorm3d - **eps** (float) - 加在分母上的值,以确保数值稳定。默认值: ``1e-5`` 。 - **momentum** (float) - 动态均值和动态方差所使用的动量。默认值: ``0.9`` 。 - **affine** (bool) - bool类型。设置为 ``True`` 时,可以学习gama和beta。默认值: ``True`` 。 - - **gamma_init** 
(Union[Tensor, str, Initializer, numbers.Number]) - gamma参数的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 - - **beta_init** (Union[Tensor, str, Initializer, numbers.Number]) - beta参数的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'zeros'`` 。 - - **moving_mean_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态均值和动态方差所使用的动量。平均值的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值:``'zeros'`` 。 - - **moving_var_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态均值和动态方差所使用的动量。方差的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 + - **gamma_init** (Union[Tensor, str, Initializer, numbers.Number]) - gamma参数的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 + - **beta_init** (Union[Tensor, str, Initializer, numbers.Number]) - beta参数的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'zeros'`` 。 + - **moving_mean_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态平均值的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'zeros'`` 。 + - **moving_var_init** (Union[Tensor, str, Initializer, numbers.Number]) - 动态方差的初始化方法。str的值引用自函数 `mindspore.common.initializer `_ ,包括 ``'zeros'`` 、 ``'ones'`` 等。默认值: ``'ones'`` 。 - **use_batch_statistics** (bool) - 如果为 ``True`` ,则使用当前批次数据的平均值和方差值。如果为 ``False`` ,则使用指定的平均值和方差值。如果为 ``None`` ,训练时,将使用当前批次数据的均值和方差,并更新动态均值和方差,验证过程将直接使用动态均值和方差。默认值: ``None`` 。 - **dtype** (:class:`mindspore.dtype`) - Parameters的dtype。默认值: ``mstype.float32`` 。 diff --git a/docs/api/api_python/nn/mindspore.nn.Cell.rst b/docs/api/api_python/nn/mindspore.nn.Cell.rst index 9007094c9fb..46ae49d8c7c 100644 --- a/docs/api/api_python/nn/mindspore.nn.Cell.rst +++ b/docs/api/api_python/nn/mindspore.nn.Cell.rst @@ -60,7 +60,7 @@ 教程样例:
- `Cell与参数 - 自定义Cell反向 - `_ + `_ .. py:method:: cast_inputs(inputs, dst_type) @@ -298,7 +298,7 @@ 迭代器,Cell的名称和Cell本身。 教程样例: - - `网络构建 - 模型参数 `_ + - `网络构建 - 模型参数 `_ .. py:method:: parameters_broadcast_dict(recurse=True) @@ -433,7 +433,7 @@ 为了提升网络性能,可以配置boost内的算法让框架自动使能该算法来加速网络训练。 请确保 `boost_type` 所选择的算法在 - `algorithm library `_ 算法库中。 + `algorithm library `_ 算法库中。 .. note:: 部分加速算法可能影响网络精度,请谨慎选择。 @@ -526,7 +526,7 @@ Cell类型,Cell本身。 教程样例: - - `模型训练 - 训练与评估实现 `_ + - `模型训练 - 训练与评估实现 `_ .. py:method:: shard(in_strategy, out_strategy=None, parameter_plan=None, device="Ascend", level=0) @@ -577,7 +577,7 @@ List类型,可训练参数列表。 教程样例: - - `模型训练 - 优化器 `_ + - `模型训练 - 优化器 `_ .. py:method:: untrainable_params(recurse=True) diff --git a/docs/api/api_python/nn/mindspore.nn.CellList.rst b/docs/api/api_python/nn/mindspore.nn.CellList.rst index 587bcda76dd..bcf0adb31f3 100644 --- a/docs/api/api_python/nn/mindspore.nn.CellList.rst +++ b/docs/api/api_python/nn/mindspore.nn.CellList.rst @@ -3,7 +3,7 @@ mindspore.nn.CellList .. 
py:class:: mindspore.nn.CellList(*args, **kwargs) - 构造Cell列表。关于Cell的介绍,可参考 `Cell `_。 + 构造Cell列表。关于Cell的介绍,可参考 `Cell `_。 CellList可以像普通Python列表一样使用,其包含的Cell均已初始化,其包含的Cell的类型不能为CellDict。 diff --git a/docs/api/api_python/nn/mindspore.nn.Conv1d.rst b/docs/api/api_python/nn/mindspore.nn.Conv1d.rst index 92b811fcacb..62a1cd27324 100644 --- a/docs/api/api_python/nn/mindspore.nn.Conv1d.rst +++ b/docs/api/api_python/nn/mindspore.nn.Conv1d.rst @@ -50,8 +50,8 @@ mindspore.nn.Conv1d 假设 :math:`dilation=(d0,)`, 则卷积核在宽度方向间隔 :math:`d0-1` 个元素进行采样。取值范围为[1, L]。默认值: ``1`` 。 - **group** (int,可选) - 将过滤器拆分为组, `in_channels` 和 `out_channels` 必须可被 `group` 整除。默认值:``1`` 。 - **has_bias** (bool,可选) - Conv1d层是否添加偏置参数。默认值: ``False`` 。 - - **weight_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 权重参数的初始化方法。它可以是Tensor,str,Initializer或numbers.Number。当使用str时,可选 ``"TruncatedNormal"`` , ``"Normal"`` , ``"Uniform"`` , ``"HeUniform"`` 和 ``"XavierUniform"`` 分布以及常量 ``"One"`` 和 ``"Zero"`` 分布的值,可接受别名 ``"xavier_uniform"`` , ``"he_uniform"`` , ``"ones"`` 和 ``"zeros"`` 。上述字符串大小写均可。更多细节请参考 `Initializer `_, 的值。默认值: ``None`` ,权重使用 ``"HeUniform"`` 初始化。 - - **bias_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 偏置参数的初始化方法。可以使用的初始化方法与 `weight_init` 相同。更多细节请参考 `Initializer `_, 的值。默认值: ``None`` ,偏差使用 ``"Uniform"`` 初始化。 + - **weight_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 权重参数的初始化方法。它可以是Tensor,str,Initializer或numbers.Number。当使用str时,可选 ``"TruncatedNormal"`` , ``"Normal"`` , ``"Uniform"`` , ``"HeUniform"`` 和 ``"XavierUniform"`` 分布以及常量 ``"One"`` 和 ``"Zero"`` 分布的值,可接受别名 ``"xavier_uniform"`` , ``"he_uniform"`` , ``"ones"`` 和 ``"zeros"`` 。上述字符串大小写均可。更多细节请参考 `Initializer `_ 的值。默认值: ``None`` ,权重使用 ``"HeUniform"`` 初始化。 + - **bias_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 偏置参数的初始化方法。可以使用的初始化方法与 `weight_init` 相同。更多细节请参考 `Initializer `_ 的值。默认值: ``None`` ,偏差使用 ``"Uniform"`` 初始化。 - **dtype** (:class:`mindspore.dtype`) - Parameters的dtype。默认值: ``mstype.float32`` 。
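上面 Conv1d 的 `weight_init` 默认使用 ``"HeUniform"`` 初始化,其采样边界可以用下面的 numpy 草图示意。仅为示意性实现:函数名 ``he_uniform`` 为假设名称而非 MindSpore API;假设负斜率 a=0,此时 bound = sqrt(6 / fan_in),fan_in 为 in_channels 与卷积核大小的乘积。

```python
import numpy as np

def he_uniform(shape, rng=None):
    # 示意:HeUniform 初始化从均匀分布 U(-bound, bound) 采样,
    # bound = sqrt(6 / fan_in),fan_in = in_channels * kernel_size
    rng = rng or np.random.default_rng(0)
    fan_in = int(np.prod(shape[1:]))  # shape 为 (out_channels, in_channels, *kernel)
    bound = np.sqrt(6.0 / fan_in)
    return rng.uniform(-bound, bound, size=shape).astype(np.float32)

w = he_uniform((240, 120, 4))  # Conv1d 形式的权重: (C_out, C_in, K)
print(w.shape)  # (240, 120, 4)
```

此处 fan_in = 120 * 4 = 480,所有权重元素都落在 (-sqrt(6/480), sqrt(6/480)) 区间内。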
输入: diff --git a/docs/api/api_python/nn/mindspore.nn.Conv2d.rst b/docs/api/api_python/nn/mindspore.nn.Conv2d.rst index 758b039cd0b..efd230a832a 100644 --- a/docs/api/api_python/nn/mindspore.nn.Conv2d.rst +++ b/docs/api/api_python/nn/mindspore.nn.Conv2d.rst @@ -51,8 +51,8 @@ mindspore.nn.Conv2d 假设 :math:`dilation=(d0, d1)`, 则卷积核在高度方向间隔 :math:`d0-1` 个元素进行采样,在宽度方向间隔 :math:`d1-1` 个元素进行采样。高度和宽度上取值范围分别为[1, H]和[1, W]。默认值: ``1`` 。 - **group** (int,可选) - 将过滤器拆分为组, `in_channels` 和 `out_channels` 必须可被 `group` 整除。如果组数等于 `in_channels` 和 `out_channels` ,这个二维卷积层也被称为二维深度卷积层。默认值: ``1`` 。 - **has_bias** (bool,可选) - Conv2d层是否添加偏置参数。默认值: ``False`` 。 - - **weight_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 权重参数的初始化方法。它可以是Tensor,str,Initializer或numbers.Number。当使用str时,可选 ``"TruncatedNormal"`` , ``"Normal"`` , ``"Uniform"`` , ``"HeUniform"`` 和 ``"XavierUniform"`` 分布以及常量 ``"One"`` 和 ``"Zero"`` 分布的值,可接受别名 ``"xavier_uniform"`` , ``"he_uniform"`` , ``"ones"`` 和 ``"zeros"`` 。上述字符串大小写均可。更多细节请参考 `Initializer `_, 的值。默认值: ``None`` ,权重使用 ``"HeUniform"`` 初始化。 - - **bias_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 偏置参数的初始化方法。可以使用的初始化方法与 `weight_init` 相同。更多细节请参考 `Initializer `_ 的值。默认值: ``None`` ,偏差使用 ``"Uniform"`` 初始化。 + - **weight_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 权重参数的初始化方法。它可以是Tensor,str,Initializer或numbers.Number。当使用str时,可选 ``"TruncatedNormal"`` , ``"Normal"`` , ``"Uniform"`` , ``"HeUniform"`` 和 ``"XavierUniform"`` 分布以及常量 ``"One"`` 和 ``"Zero"`` 分布的值,可接受别名 ``"xavier_uniform"`` , ``"he_uniform"`` , ``"ones"`` 和 ``"zeros"`` 。上述字符串大小写均可。更多细节请参考 `Initializer `_ 的值。默认值: ``None`` ,权重使用 ``"HeUniform"`` 初始化。 + - **bias_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 偏置参数的初始化方法。可以使用的初始化方法与 `weight_init` 相同。更多细节请参考 `Initializer `_ 的值。默认值: ``None`` ,偏差使用 ``"Uniform"`` 初始化。 - **data_format** (str,可选) - 数据格式的可选值有 ``"NHWC"`` , ``"NCHW"`` 。默认值: ``"NCHW"`` 。(目前仅GPU支持NHWC。) - **dtype** (:class:`mindspore.dtype`) - Parameters的dtype。默认值:
``mstype.float32`` 。 diff --git a/docs/api/api_python/nn/mindspore.nn.Conv3d.rst b/docs/api/api_python/nn/mindspore.nn.Conv3d.rst index 42b87d1f5f8..1e85921e60b 100644 --- a/docs/api/api_python/nn/mindspore.nn.Conv3d.rst +++ b/docs/api/api_python/nn/mindspore.nn.Conv3d.rst @@ -52,8 +52,8 @@ mindspore.nn.Conv3d 假设 :math:`dilation=(d0, d1, d2)`,则卷积核在深度方向间隔 :math:`d0-1` 个元素进行采样,在高度方向间隔 :math:`d1-1` 个元素进行采样,在宽度方向间隔 :math:`d2-1` 个元素进行采样。深度、高度和宽度上取值范围分别为[1, D]、[1, H]和[1, W]。默认值: ``1`` 。 - **group** (int,可选) - 将过滤器拆分为组, `in_channels` 和 `out_channels` 必须可被 `group` 整除。默认值: ``1`` 。 - **has_bias** (bool,可选) - Conv3d层是否添加偏置参数。默认值: ``False`` 。 - - **weight_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 权重参数的初始化方法。它可以是Tensor,str,Initializer或numbers.Number。当使用str时,可选 ``"TruncatedNormal"`` , ``"Normal"`` , ``"Uniform"`` , ``"HeUniform"`` 和 ``"XavierUniform"`` 分布以及常量 ``"One"`` 和 ``"Zero"`` 分布的值,可接受别名 ``"xavier_uniform"`` , ``"he_uniform"`` , ``"ones"`` 和 ``"zeros"`` 。上述字符串大小写均可。更多细节请参考 `Initializer `_, 的值。默认值: ``None`` ,权重使用 ``"HeUniform"`` 初始化。 - - **bias_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 偏置参数的初始化方法。可以使用的初始化方法与 `weight_init` 相同。更多细节请参考 `Initializer `_, 的值。默认值: ``None`` ,偏差使用 ``"Uniform"`` 初始化。 + - **weight_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 权重参数的初始化方法。它可以是Tensor,str,Initializer或numbers.Number。当使用str时,可选 ``"TruncatedNormal"`` , ``"Normal"`` , ``"Uniform"`` , ``"HeUniform"`` 和 ``"XavierUniform"`` 分布以及常量 ``"One"`` 和 ``"Zero"`` 分布的值,可接受别名 ``"xavier_uniform"`` , ``"he_uniform"`` , ``"ones"`` 和 ``"zeros"`` 。上述字符串大小写均可。更多细节请参考 `Initializer `_ 的值。默认值: ``None`` ,权重使用 ``"HeUniform"`` 初始化。 + - **bias_init** (Union[Tensor, str, Initializer, numbers.Number],可选) - 偏置参数的初始化方法。可以使用的初始化方法与 `weight_init` 相同。更多细节请参考 `Initializer `_ 的值。默认值: ``None`` ,偏差使用 ``"Uniform"`` 初始化。 - **data_format** (str,可选) - 数据格式的可选值。目前仅支持 ``'NCDHW'`` 。 - **dtype** (:class:`mindspore.dtype`) - Parameters的dtype。默认值: ``mstype.float32`` 。 diff --git
a/docs/api/api_python/nn/mindspore.nn.Embedding.rst b/docs/api/api_python/nn/mindspore.nn.Embedding.rst index 25f05f9eae0..376ed926583 100644 --- a/docs/api/api_python/nn/mindspore.nn.Embedding.rst +++ b/docs/api/api_python/nn/mindspore.nn.Embedding.rst @@ -14,7 +14,7 @@ mindspore.nn.Embedding - **vocab_size** (int) - 词典的大小。 - **embedding_size** (int) - 每个嵌入向量的大小。 - **use_one_hot** (bool) - 指定是否使用one-hot形式。默认值: ``False`` 。 - - **embedding_table** (Union[Tensor, str, Initializer, numbers.Number]) - embedding_table的初始化方法。当指定为字符串,字符串取值请参见类 `mindspore.common.initializer `_ 。默认值: ``"normal"`` 。 + - **embedding_table** (Union[Tensor, str, Initializer, numbers.Number]) - embedding_table的初始化方法。当指定为字符串,字符串取值请参见类 `mindspore.common.initializer `_ 。默认值: ``"normal"`` 。 - **dtype** (mindspore.dtype) - x的数据类型。默认值: ``mstype.float32`` 。 - **padding_idx** (int, None) - 将 `padding_idx` 对应索引所输出的嵌入向量用零填充。默认值: ``None`` 。该功能已停用。 diff --git a/docs/api/api_python/nn/mindspore.nn.LazyAdam.rst b/docs/api/api_python/nn/mindspore.nn.LazyAdam.rst index 9feaed9e1d3..615db9aedb8 100644 --- a/docs/api/api_python/nn/mindspore.nn.LazyAdam.rst +++ b/docs/api/api_python/nn/mindspore.nn.LazyAdam.rst @@ -58,7 +58,7 @@ mindspore.nn.LazyAdam Tensor[bool],值为 ``True`` 。 异常: - - **TypeError** - `learning_rate` 不是int、float、Tensor、Iterable或 `LearningRateSchedule `_ 。 + - **TypeError** - `learning_rate` 不是int、float、Tensor、Iterable或 `LearningRateSchedule `_ 。 - **TypeError** - `parameters` 的元素不是Parameter或字典。 - **TypeError** - `beta1`、`beta2`、`eps` 或 `loss_scale` 不是float。 - **TypeError** - `weight_decay` 不是float或int。 diff --git a/docs/api/api_python/nn/mindspore.nn.Optimizer.rst b/docs/api/api_python/nn/mindspore.nn.Optimizer.rst index fde2e7a796f..74e2f134628 100644 --- a/docs/api/api_python/nn/mindspore.nn.Optimizer.rst +++ b/docs/api/api_python/nn/mindspore.nn.Optimizer.rst @@ -3,7 +3,7 @@ mindspore.nn.Optimizer .. 
py:class:: mindspore.nn.Optimizer(learning_rate, parameters, weight_decay=0.0, loss_scale=1.0) - 用于参数更新的优化器基类。不要直接使用这个类,请实例化它的一个子类。详见 `优化器 `_ 。 + 用于参数更新的优化器基类。不要直接使用这个类,请实例化它的一个子类。详见 `优化器 `_ 。 优化器支持参数分组。当参数分组时,每组参数均可配置不同的学习率(`lr` )、权重衰减(`weight_decay`)和梯度中心化(`grad_centralization`)策略。 diff --git a/docs/api/api_python/nn/mindspore.nn.SequentialCell.rst b/docs/api/api_python/nn/mindspore.nn.SequentialCell.rst index f1f47b94882..db359c94266 100644 --- a/docs/api/api_python/nn/mindspore.nn.SequentialCell.rst +++ b/docs/api/api_python/nn/mindspore.nn.SequentialCell.rst @@ -3,7 +3,7 @@ mindspore.nn.SequentialCell .. py:class:: mindspore.nn.SequentialCell(*args) - 构造Cell顺序容器。关于Cell的介绍,可参考 `Cell `_。 + 构造Cell顺序容器。关于Cell的介绍,可参考 `Cell `_。 SequentialCell将按照传入List的顺序依次将Cell添加。此外,也支持OrderedDict作为构造器传入。 diff --git a/docs/api/api_python/nn/mindspore.nn.optim_arg_dynamic_lr.rst b/docs/api/api_python/nn/mindspore.nn.optim_arg_dynamic_lr.rst index a65adbd8ff3..08fe00dad63 100644 --- a/docs/api/api_python/nn/mindspore.nn.optim_arg_dynamic_lr.rst +++ b/docs/api/api_python/nn/mindspore.nn.optim_arg_dynamic_lr.rst @@ -2,4 +2,4 @@ - **int** - 固定的学习率。必须大于等于零。整数类型会被转换为浮点数。 - **Tensor** - 可以是标量或一维向量。标量是固定的学习率。一维向量是动态的学习率,第i步将取向量中第i个值作为学习率。 - **Iterable** - 动态的学习率。第i步将取迭代器第i个值作为学习率。 -- **LearningRateSchedule** - 动态的学习率。在训练过程中,优化器将以step作为输入,调用 `LearningRateSchedule `_ 实例来计算当前学习率。 \ No newline at end of file +- **LearningRateSchedule** - 动态的学习率。在训练过程中,优化器将以step作为输入,调用 `LearningRateSchedule `_ 实例来计算当前学习率。 \ No newline at end of file diff --git a/docs/api/api_python/nn/mindspore.nn.optim_note_loss_scale.rst b/docs/api/api_python/nn/mindspore.nn.optim_note_loss_scale.rst index 50942b59412..e3f27dd7bcb 100644 --- a/docs/api/api_python/nn/mindspore.nn.optim_note_loss_scale.rst +++ b/docs/api/api_python/nn/mindspore.nn.optim_note_loss_scale.rst @@ -2,4 +2,4 @@ 由于此优化器没有 `loss_scale`
的参数,因此需要通过其他方式处理 `loss_scale` 。 -如何正确处理 `loss_scale` 详见 `LossScale `_。 +如何正确处理 `loss_scale` 详见 `LossScale `_。 diff --git a/docs/api/api_python/ops/mindspore.ops.Add.rst b/docs/api/api_python/ops/mindspore.ops.Add.rst index 0aaa9edff5d..1c645046095 100644 --- a/docs/api/api_python/ops/mindspore.ops.Add.rst +++ b/docs/api/api_python/ops/mindspore.ops.Add.rst @@ -14,8 +14,8 @@ mindspore.ops.Add - 当输入为Tensor的时候,维度应大于或等于1。 输入: - - **x** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - - **y** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **x** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **y** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 输出: Tensor,shape与输入 `x`、 `y` 广播后的shape相同,数据类型为两个输入中精度较高的类型。 diff --git a/docs/api/api_python/ops/mindspore.ops.AllGather.rst b/docs/api/api_python/ops/mindspore.ops.AllGather.rst index 9ab0e7b499f..211f4b54fa4 100644 --- a/docs/api/api_python/ops/mindspore.ops.AllGather.rst +++ b/docs/api/api_python/ops/mindspore.ops.AllGather.rst @@ -31,4 +31,4 @@ 教程样例: - `分布式集合通信原语 - AllGather - `_ \ No newline at end of file + `_ \ No newline at end of file diff --git a/docs/api/api_python/ops/mindspore.ops.AllReduce.rst b/docs/api/api_python/ops/mindspore.ops.AllReduce.rst index f10e523709c..1f6f49c3bea 100644 --- a/docs/api/api_python/ops/mindspore.ops.AllReduce.rst +++ b/docs/api/api_python/ops/mindspore.ops.AllReduce.rst @@ -30,4 +30,4 @@ 教程样例: - `分布式集合通信原语 - AllReduce - `_ + `_ diff --git a/docs/api/api_python/ops/mindspore.ops.AlltoAll.rst b/docs/api/api_python/ops/mindspore.ops.AlltoAll.rst index f9d8b020c82..2ad38ca1f0a 100644 --- a/docs/api/api_python/ops/mindspore.ops.AlltoAll.rst +++ b/docs/api/api_python/ops/mindspore.ops.AlltoAll.rst @@ -13,7 +13,7 @@ mindspore.ops.AlltoAll .. 
note:: 聚合阶段,所有进程中的Tensor必须具有相同的shape和格式。 - 要求全连接配网方式,每台设备具有相同的vlan id,ip和mask在同一子网,请查看 `详细信息 `_ 。 + 要求全连接配网方式,每台设备具有相同的vlan id,ip和mask在同一子网,请查看 `详细信息 `_ 。 参数: - **split_count** (int) - 在每个进程上,将块(blocks)拆分为 `split_count` 个。 @@ -43,4 +43,4 @@ mindspore.ops.AlltoAll 教程样例: - `分布式集合通信原语 - AlltoAll - `_ \ No newline at end of file + `_ \ No newline at end of file diff --git a/docs/api/api_python/ops/mindspore.ops.Broadcast.rst b/docs/api/api_python/ops/mindspore.ops.Broadcast.rst index 77f4b776a16..914f418282e 100644 --- a/docs/api/api_python/ops/mindspore.ops.Broadcast.rst +++ b/docs/api/api_python/ops/mindspore.ops.Broadcast.rst @@ -30,4 +30,4 @@ 教程样例: - `分布式集合通信原语 - Broadcast - `_ \ No newline at end of file + `_ \ No newline at end of file diff --git a/docs/api/api_python/ops/mindspore.ops.Custom.rst b/docs/api/api_python/ops/mindspore.ops.Custom.rst index dfb56db8ce9..efb09afd7a9 100644 --- a/docs/api/api_python/ops/mindspore.ops.Custom.rst +++ b/docs/api/api_python/ops/mindspore.ops.Custom.rst @@ -5,7 +5,7 @@ mindspore.ops.Custom `Custom` 算子是MindSpore自定义算子的统一接口。用户可以利用该接口自行定义MindSpore内置算子库尚未包含的算子。 根据输入函数的不同,你可以创建多个自定义算子,并且把它们用在神经网络中。 - 关于自定义算子的详细说明和介绍,包括参数的正确书写,见 `自定义算子教程 `_ 。 + 关于自定义算子的详细说明和介绍,包括参数的正确书写,见 `自定义算子教程 `_ 。 .. warning:: - 这是一个实验性API,后续可能修改或删除。 diff --git a/docs/api/api_python/ops/mindspore.ops.CustomRegOp.rst b/docs/api/api_python/ops/mindspore.ops.CustomRegOp.rst index 743772f1ce7..8de316d5007 100644 --- a/docs/api/api_python/ops/mindspore.ops.CustomRegOp.rst +++ b/docs/api/api_python/ops/mindspore.ops.CustomRegOp.rst @@ -41,7 +41,7 @@ mindspore.ops.CustomRegOp 教程样例: - `自定义算子(基于Custom表达) - aicpu类型的自定义算子开发 - `_ + `_ .. py:method:: dtype_format(*args) @@ -55,7 +55,7 @@ mindspore.ops.CustomRegOp 教程样例: - `自定义算子(基于Custom表达) - aicpu类型的自定义算子开发 - `_ + `_ .. py:method:: get_op_info() @@ -63,7 +63,7 @@ mindspore.ops.CustomRegOp 教程样例: - `自定义算子(基于Custom表达) - aicpu类型的自定义算子开发 - `_ + `_ .. 
py:method:: input(index=None, name=None, param_type="required", **kwargs) @@ -87,7 +87,7 @@ mindspore.ops.CustomRegOp 教程样例: - `自定义算子(基于Custom表达) - aicpu类型的自定义算子开发 - `_ + `_ .. py:method:: output(index=None, name=None, param_type="required", **kwargs) @@ -111,7 +111,7 @@ mindspore.ops.CustomRegOp 教程样例: - `自定义算子(基于Custom表达) - aicpu类型的自定义算子开发 - `_ + `_ .. py:method:: target(target=None) @@ -125,4 +125,4 @@ mindspore.ops.CustomRegOp 教程样例: - `自定义算子(基于Custom表达) - aicpu类型的自定义算子开发 - `_ \ No newline at end of file + `_ \ No newline at end of file diff --git a/docs/api/api_python/ops/mindspore.ops.Div.rst b/docs/api/api_python/ops/mindspore.ops.Div.rst index 6d63ed9c695..39cb4eb4d88 100644 --- a/docs/api/api_python/ops/mindspore.ops.Div.rst +++ b/docs/api/api_python/ops/mindspore.ops.Div.rst @@ -13,8 +13,8 @@ mindspore.ops.Div - 两个输入遵循隐式类型转换规则,使数据类型保持一致。 输入: - - **x** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - - **y** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **x** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **y** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 输出: Tensor,shape与输入 `x`,`y` 广播后的shape相同,数据类型为两个输入中精度较高的类型。 diff --git a/docs/api/api_python/ops/mindspore.ops.DivNoNan.rst b/docs/api/api_python/ops/mindspore.ops.DivNoNan.rst index c3dd17914e0..833cfc3780e 100644 --- a/docs/api/api_python/ops/mindspore.ops.DivNoNan.rst +++ b/docs/api/api_python/ops/mindspore.ops.DivNoNan.rst @@ -17,7 +17,7 @@ mindspore.ops.DivNoNan \end{cases} 输入: - - **x1** (Union[Tensor, number.Number, bool]) - 第一个输入是number.Number、bool或者Tensor,数据类型为 `number `_ 。 + - **x1** (Union[Tensor, number.Number, bool]) - 第一个输入是number.Number、bool或者Tensor,数据类型为 `number `_ 。 - **x2** (Union[Tensor, number.Number, bool]) - 
当第一个输入是bool或数据类型为number或bool\_的Tensor时,第二个输入是number.Number或bool。当第一个输入是Scalar时,第二个输入必须是数据类型为number或bool\_的Tensor。 输出: diff --git a/docs/api/api_python/ops/mindspore.ops.LessEqual.rst b/docs/api/api_python/ops/mindspore.ops.LessEqual.rst index 30145441f12..a3841badca6 100644 --- a/docs/api/api_python/ops/mindspore.ops.LessEqual.rst +++ b/docs/api/api_python/ops/mindspore.ops.LessEqual.rst @@ -8,8 +8,8 @@ mindspore.ops.LessEqual 更多参考详见 :func:`mindspore.ops.le`。 输入: - - **x** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - - **y** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **x** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **y** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 输出: Tensor,shape与广播后的shape相同,数据类型为bool。 diff --git a/docs/api/api_python/ops/mindspore.ops.Mul.rst b/docs/api/api_python/ops/mindspore.ops.Mul.rst index 4eaa6907406..6f3e424ae2e 100644 --- a/docs/api/api_python/ops/mindspore.ops.Mul.rst +++ b/docs/api/api_python/ops/mindspore.ops.Mul.rst @@ -13,8 +13,8 @@ mindspore.ops.Mul - 两个输入遵循隐式类型转换规则,使数据类型保持一致。 输入: - - **x** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - - **y** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **x** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **y** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 输出: Tensor,shape与广播后的shape相同,数据类型为两个输入中精度较高的类型。 diff --git a/docs/api/api_python/ops/mindspore.ops.NeighborExchangeV2.rst b/docs/api/api_python/ops/mindspore.ops.NeighborExchangeV2.rst index 2aadf7bfbe6..f892f58f201 100644 --- 
a/docs/api/api_python/ops/mindspore.ops.NeighborExchangeV2.rst +++ b/docs/api/api_python/ops/mindspore.ops.NeighborExchangeV2.rst @@ -8,7 +8,7 @@ mindspore.ops.NeighborExchangeV2 将数据从本地rank发送到 `send_rank_ids` 中指定的rank,同时从 `recv_rank_ids` 接收数据。请参考下方教程样例了解具体的数据是如何在相邻设备间交换的。 .. note:: - 要求全连接配网,每台设备具有相同的vlan id,ip和mask在同一子网,请查看 `分布式集合通信原语注意事项 `_ 。 + 要求全连接配网,每台设备具有相同的vlan id,ip和mask在同一子网,请查看 `分布式集合通信原语注意事项 `_ 。 参数: - **send_rank_ids** (list(int)) - 指定发送数据的rank。8个rank_id分别代表8个方向上的数据要向哪个rank发送,如果某个方向上不发送数据,则设为-1。 @@ -38,4 +38,4 @@ mindspore.ops.NeighborExchangeV2 教程样例: - `分布式集合通信原语 - NeighborExchangeV2 - `_ \ No newline at end of file + `_ \ No newline at end of file diff --git a/docs/api/api_python/ops/mindspore.ops.PrimitiveWithCheck.rst b/docs/api/api_python/ops/mindspore.ops.PrimitiveWithCheck.rst index 0642541f9d7..4aa0a182658 100644 --- a/docs/api/api_python/ops/mindspore.ops.PrimitiveWithCheck.rst +++ b/docs/api/api_python/ops/mindspore.ops.PrimitiveWithCheck.rst @@ -9,7 +9,7 @@ mindspore.ops.PrimitiveWithCheck 如果未定义__check__(),则可以定义check_shape()和check_dtype()来描述形状和类型的检查逻辑。可以定义infer_value()方法(如PrimitiveWithInfer),用于常量传播。 - 了解更多如何自定义算子,请查看 `自定义算子 `_ 。 + 了解更多如何自定义算子,请查看 `自定义算子 `_ 。 参数: - **name** (str) - 当前Primitive的名称。 diff --git a/docs/api/api_python/ops/mindspore.ops.PrimitiveWithInfer.rst b/docs/api/api_python/ops/mindspore.ops.PrimitiveWithInfer.rst index d91e2762e43..775987c51e0 100644 --- a/docs/api/api_python/ops/mindspore.ops.PrimitiveWithInfer.rst +++ b/docs/api/api_python/ops/mindspore.ops.PrimitiveWithInfer.rst @@ -9,7 +9,7 @@ mindspore.ops.PrimitiveWithInfer 如果未定义__infer__(),则可以定义infer_shape()和infer_dtype()来描述shape和类型的推断逻辑。infer_value()用于常量传播。 - 关于如何自定义算子,请查看 `自定义算子 `_ 。 + 关于如何自定义算子,请查看 `自定义算子 `_ 。 参数: - **name** (str) - 当前Primitive的名称。 diff --git a/docs/api/api_python/ops/mindspore.ops.ReduceScatter.rst b/docs/api/api_python/ops/mindspore.ops.ReduceScatter.rst index 7516a59ee29..4622700fd47 100644 --- 
a/docs/api/api_python/ops/mindspore.ops.ReduceScatter.rst +++ b/docs/api/api_python/ops/mindspore.ops.ReduceScatter.rst @@ -31,4 +31,4 @@ mindspore.ops.ReduceScatter 教程样例: - `分布式集合通信原语 - ReduceScatter - `_ + `_ diff --git a/docs/api/api_python/ops/mindspore.ops.Size.rst b/docs/api/api_python/ops/mindspore.ops.Size.rst index 66f1a900b57..5e61c1b1071 100644 --- a/docs/api/api_python/ops/mindspore.ops.Size.rst +++ b/docs/api/api_python/ops/mindspore.ops.Size.rst @@ -8,7 +8,7 @@ mindspore.ops.Size 更多参考详见 :func:`mindspore.ops.size`。 输入: - - **input_x** (Tensor) - 输入参数,shape为 :math:`(x_1, x_2, ..., x_R)` 。数据类型为 `number `_ 。 + - **input_x** (Tensor) - 输入参数,shape为 :math:`(x_1, x_2, ..., x_R)` 。数据类型为 `number `_ 。 输出: int,表示 `input_x` 元素大小的Scalar。它的值为 :math:`size=x_1*x_2*...x_R` 。 diff --git a/docs/api/api_python/ops/mindspore.ops.Sub.rst b/docs/api/api_python/ops/mindspore.ops.Sub.rst index 2fee340e06f..45a5032e45d 100644 --- a/docs/api/api_python/ops/mindspore.ops.Sub.rst +++ b/docs/api/api_python/ops/mindspore.ops.Sub.rst @@ -13,7 +13,7 @@ mindspore.ops.Sub - 两个输入遵循隐式类型转换规则,使数据类型保持一致。 输入: - - **x** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **x** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - **y** (Union[Tensor, number.Number, bool]) - 第二个输入,当第一个输入是Tensor时,第二个输入应该是一个number.Number或bool值,或数据类型为number或bool的Tensor。 输出: diff --git a/docs/api/api_python/ops/mindspore.ops.Xlogy.rst b/docs/api/api_python/ops/mindspore.ops.Xlogy.rst index d3bff352829..826566d0cd0 100644 --- a/docs/api/api_python/ops/mindspore.ops.Xlogy.rst +++ b/docs/api/api_python/ops/mindspore.ops.Xlogy.rst @@ -8,7 +8,7 @@ 更多参考详见 :func:`mindspore.ops.xlogy`。 输入: - - **x** (Union[Tensor, number.Number, bool]) - 第一个输入为数值型。数据类型为 `number `_ 或 `bool_ `_ 。 + - **x** (Union[Tensor, number.Number, bool]) - 第一个输入为数值型。数据类型为 `number `_ 或 `bool_ `_ 。 - **y** (Union[Tensor, number.Number, bool]) - 
第二个输入为数值型。当第一个输入是Tensor或数据类型为数值型或bool的Tensor时, 则第二个输入是数值型或bool。当第一个输入是Scalar时,则第二个输入必须是数据类型为数值型或bool的Tensor。 输出: diff --git a/docs/api/api_python/ops/mindspore.ops.comm_note.rst b/docs/api/api_python/ops/mindspore.ops.comm_note.rst index ed74fb61f04..32ed3f43dea 100644 --- a/docs/api/api_python/ops/mindspore.ops.comm_note.rst +++ b/docs/api/api_python/ops/mindspore.ops.comm_note.rst @@ -1,7 +1,7 @@ 运行以下样例之前,需要配置好通信环境变量。 -针对Ascend设备,用户需要准备rank表,设置rank_id和device_id,详见 `rank table启动 `_ 。 +针对Ascend设备,用户需要准备rank表,设置rank_id和device_id,详见 `rank table启动 `_ 。 -针对GPU设备,用户需要准备host文件和mpi,详见 `mpirun启动 `_ 。 +针对GPU设备,用户需要准备host文件和mpi,详见 `mpirun启动 `_ 。 -针对CPU设备,用户需要编写动态组网启动脚本,详见 `动态组网启动 `_ 。 \ No newline at end of file +针对CPU设备,用户需要编写动态组网启动脚本,详见 `动态组网启动 `_ 。 \ No newline at end of file diff --git a/docs/api/api_python/ops/mindspore.ops.extend.func_add.rst b/docs/api/api_python/ops/mindspore.ops.extend.func_add.rst index 64fbd4d4720..b622b5eb874 100644 --- a/docs/api/api_python/ops/mindspore.ops.extend.func_add.rst +++ b/docs/api/api_python/ops/mindspore.ops.extend.func_add.rst @@ -18,12 +18,12 @@ mindspore.ops.extend.add 参数: - **input** (Union[Tensor, number.Number, bool]) - 第一个输入是一个 number.Number、 一个 bool 或一个数据类型为 - `number `_ 或 - `bool_ `_ 的Tensor。 + `number `_ 或 + `bool_ `_ 的Tensor。 - **other** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个 number.Number、 一个 bool 或一个数据类型为 - `number `_ 或 - `bool_ `_ 的Tensor。 + `number `_ 或 + `bool_ `_ 的Tensor。 - **alpha** (number.Number) - 应用于 `other` 的缩放因子,默认值为1。 返回: diff --git a/docs/api/api_python/ops/mindspore.ops.extend.func_sub.rst b/docs/api/api_python/ops/mindspore.ops.extend.func_sub.rst index 39415879fd4..abec8850695 100644 --- a/docs/api/api_python/ops/mindspore.ops.extend.func_sub.rst +++ b/docs/api/api_python/ops/mindspore.ops.extend.func_sub.rst @@ -18,12 +18,12 @@ mindspore.ops.extend.sub 参数: - **input** (Union[Tensor, number.Number, bool]) - 第一个输入是一个 number.Number、 一个 bool 或一个数据类型为 - `number `_ 或 - `bool_ `_ 的Tensor。 + `number 
`_ 或 + `bool_ `_ 的Tensor。 - **other** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个 number.Number、 一个 bool 或一个数据类型为 - `number `_ 或 - `bool_ `_ 的Tensor。 + `number `_ 或 + `bool_ `_ 的Tensor。 - **alpha** (number.Number) - 应用于 `other` 的缩放因子,默认值为1。 返回: diff --git a/docs/api/api_python/ops/mindspore.ops.func_add.rst b/docs/api/api_python/ops/mindspore.ops.func_add.rst index 68646bef19b..c7248847970 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_add.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_add.rst @@ -16,8 +16,8 @@ mindspore.ops.add - 当输入为Tensor的时候,维度应大于等于1。 参数: - - **input** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - - **other** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **input** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **other** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 返回: Tensor,shape与输入 `input`,`other` 广播后的shape相同,数据类型为两个输入中精度较高的类型。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_concat.rst b/docs/api/api_python/ops/mindspore.ops.func_concat.rst index 387f6b031dc..2febd8e6aea 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_concat.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_concat.rst @@ -8,7 +8,7 @@ :func:`mindspore.ops.cat()` 的别名。 教程样例: - - `张量 Tensor - 张量运算 `_ - - `Vision Transformer图像分类 - 整体构建ViT `_ + - `张量 Tensor - 张量运算 `_ + - `Vision Transformer图像分类 - 整体构建ViT `_ - `RNN实现情感分类 - Dense - `_ \ No newline at end of file + `_ \ No newline at end of file diff --git a/docs/api/api_python/ops/mindspore.ops.func_coo_relu.rst b/docs/api/api_python/ops/mindspore.ops.func_coo_relu.rst index 66c46483e05..d888485ffde 100755 --- a/docs/api/api_python/ops/mindspore.ops.func_coo_relu.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_coo_relu.rst @@ -12,7 +12,7 @@ 
mindspore.ops.coo_relu 参数: - **x** (COOTensor) - coo_relu的输入,shape: :math:`(N, *)` ,其中 :math:`*` 表示任意数量的附加维度, - 其数据类型为 `number `_。 + 其数据类型为 `number `_。 返回: COOTensor,数据类型和shape与 `x` 相同。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_eye.rst b/docs/api/api_python/ops/mindspore.ops.func_eye.rst index ee76b9459e0..d919ceab1e4 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_eye.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_eye.rst @@ -11,7 +11,7 @@ mindspore.ops.eye 参数: - **n** (int) - 指定返回Tensor的行数。仅支持常量值。 - **m** (int,可选) - 指定返回Tensor的列数。仅支持常量值。默认值为None,返回Tensor的列数默认与n相等。 - - **dtype** (mindspore.dtype,可选) - 指定返回Tensor的数据类型。数据类型必须是 `bool_ `_ 或 `number `_ 。默认值为 ``None`` ,返回Tensor的数据类型默认为mindspore.float32。 + - **dtype** (mindspore.dtype,可选) - 指定返回Tensor的数据类型。数据类型必须是 `bool_ `_ 或 `number `_ 。默认值为 ``None`` ,返回Tensor的数据类型默认为mindspore.float32。 返回: Tensor,主对角线上为1,其余的元素为0。它的shape由 `n` 和 `m` 指定。数据类型由 `dtype` 指定。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_fill.rst b/docs/api/api_python/ops/mindspore.ops.func_fill.rst index d697e2e8ece..32fcea766ec 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_fill.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_fill.rst @@ -6,7 +6,7 @@ mindspore.ops.fill 创建一个指定shape的Tensor,并用指定值填充。 参数: - - **type** (mindspore.dtype) - 指定输出Tensor的数据类型。数据类型只支持 `bool_ `_ 和 `number `_ 。 + - **type** (mindspore.dtype) - 指定输出Tensor的数据类型。数据类型只支持 `bool_ `_ 和 `number `_ 。 - **shape** (Union(Tensor, tuple[int])) - 指定输出Tensor的shape。 - **value** (Union(Tensor, number.Number, bool)) - 用来填充输出Tensor的值。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_gather.rst b/docs/api/api_python/ops/mindspore.ops.func_gather.rst index 7c22b078602..58f353e1ee8 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_gather.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_gather.rst @@ -13,7 +13,7 @@ mindspore.ops.gather .. note:: 1. 
input_indices的值必须在 `[0, input_params.shape[axis])` 范围内。CPU与GPU平台越界访问将会抛出异常,Ascend平台越界访问的返回结果是未定义的。 - 2. Ascend平台上,input_params的数据类型当前不能是 `bool_ `_ 。 + 2. Ascend平台上,input_params的数据类型当前不能是 `bool_ `_ 。 参数: - **input_params** (Tensor) - 原始Tensor,shape为 :math:`(x_1, x_2, ..., x_R)` 。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_greater.rst b/docs/api/api_python/ops/mindspore.ops.func_greater.rst index 8feb774232a..d00423a5425 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_greater.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_greater.rst @@ -8,8 +8,8 @@ mindspore.ops.greater 更多参考详见 :func:`mindspore.ops.gt()`。 参数: - - **input** (Union[Tensor, Number]) - 第一个输入,是一个Number或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - - **other** (Union[Tensor, Number]) - 第二个输入,是一个Number或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **input** (Union[Tensor, Number]) - 第一个输入,是一个Number或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **other** (Union[Tensor, Number]) - 第二个输入,是一个Number或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 返回: Tensor,shape与广播后的shape相同,数据类型为bool。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_greater_equal.rst b/docs/api/api_python/ops/mindspore.ops.func_greater_equal.rst index a66d71ed4c8..e92a10b37e8 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_greater_equal.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_greater_equal.rst @@ -8,7 +8,7 @@ mindspore.ops.greater_equal 更多参考详见 :func:`mindspore.ops.ge`。 参数: - - **input** (Union[Tensor, Number]) - 第一个输入,是一个Number或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **input** (Union[Tensor, Number]) - 第一个输入,是一个Number或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - **other** (Union[Tensor, Number]) - 第二个输入,当第一个输入是Tensor时,第二个输入应该是一个Number或数据类型为number或bool_的Tensor。当第一个输入是Scalar时,第二个输入必须是数据类型为number或bool_的Tensor。 返回: diff --git a/docs/api/api_python/ops/mindspore.ops.func_gt.rst b/docs/api/api_python/ops/mindspore.ops.func_gt.rst index 10b52ecf591..2609682658b 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_gt.rst 
+++ b/docs/api/api_python/ops/mindspore.ops.func_gt.rst @@ -20,7 +20,7 @@ mindspore.ops.gt - 若输入的Tensor可以广播,则会把低维度通过复制该维度的值的方式扩展到另一个输入中对应的高维度。 参数: - - **input** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **input** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - **other** (Union[Tensor, number.Number, bool]) - 第二个输入,当第一个输入是Tensor时,第二个输入应该是一个number.Number或bool值,或数据类型为number或bool_的Tensor。当第一个输入是Scalar时,第二个输入必须是数据类型为number或bool_的Tensor。 返回: diff --git a/docs/api/api_python/ops/mindspore.ops.func_le.rst b/docs/api/api_python/ops/mindspore.ops.func_le.rst index a6fe1ace7d3..55ec4a1651b 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_le.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_le.rst @@ -17,7 +17,7 @@ mindspore.ops.le - 当输入是一个Tensor和一个Scalar时,Scalar只能是一个常数。 参数: - - **input** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **input** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - **other** (Union[Tensor, number.Number, bool]) - 第二个输入,当第一个输入是Tensor时,第二个输入应该是一个number.Number或bool值,或数据类型为number或bool_的Tensor。当第一个输入是Scalar时,第二个输入必须是数据类型为number或bool_的Tensor。 返回: diff --git a/docs/api/api_python/ops/mindspore.ops.func_mul.rst b/docs/api/api_python/ops/mindspore.ops.func_mul.rst index 0962820901e..ca08ad2284e 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_mul.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_mul.rst @@ -15,8 +15,8 @@ mindspore.ops.mul - 两个输入遵循隐式类型转换规则,使数据类型保持一致。 参数: - - **input** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - - **other** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **input** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 
`number `_ 或 `bool_ `_ 的Tensor。 + - **other** (Union[Tensor, number.Number, bool]) - 第二个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 返回: Tensor,shape与广播后的shape相同,数据类型为两个输入中精度较高的类型。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_pinv.rst b/docs/api/api_python/ops/mindspore.ops.func_pinv.rst index 49ea96c7e6e..81d89860ffc 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_pinv.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_pinv.rst @@ -8,18 +8,18 @@ mindspore.ops.pinv 本函数通过SVD计算。如果 :math:`x=U*S*V^{T}` ,则x的伪逆为 :math:`x^{+}=V*S^{+}*U^{T}` ,:math:`S^{+}` 为对S的对角线上的每个非零元素取倒数,零保留在原位。 支持批量矩阵,若x是批量矩阵,当atol或rtol为float时,则输出具有相同的批量维度。 - 若atol或rtol为Tensor,则其shape必须可广播到 `x.svd() `_ 返回的奇异值的shape。 + 若atol或rtol为Tensor,则其shape必须可广播到 `x.svd() `_ 返回的奇异值的shape。 若x.shape为 :math:`(B, M, N)` ,atol或rtol的shape为 :math:`(K, B)` ,则输出shape为 :math:`(K, B, N, M)` 。 当hermitian为True时,暂时仅支持实数域,默认输入x为实对称矩阵,因此不会在内部检查x,并且在计算中仅使用下三角部分。 当x的奇异值(或特征值范数,hermitian=True)小于阈值( :math:`max(atol, \sigma \cdot rtol)` , :math:`\sigma` 为最大奇异值或特征值)时,将其置为零,且在计算中不使用。 - 若rtol和atol均未指定并且x的shape(M, N),则rtol设置为 :math:`rtol=max(M, N)*\varepsilon` , :math:`\varepsilon` 为x.dtype的 `eps值 `_ 。 + 若rtol和atol均未指定并且x的shape(M, N),则rtol设置为 :math:`rtol=max(M, N)*\varepsilon` , :math:`\varepsilon` 为x.dtype的 `eps值 `_ 。 若rtol未指定且atol指定值大于零,则rtol设置为零。 .. 
note:: - 该函数在内部使用 `svd `_ - (或 `eigh `_ , `hermitian=True` ), + 该函数在内部使用 `svd `_ + (或 `eigh `_ , `hermitian=True` ), 所以和这些函数具有相同问题,详细信息请参阅svd()和eigh()中的警告。 参数: diff --git a/docs/api/api_python/ops/mindspore.ops.func_pow.rst b/docs/api/api_python/ops/mindspore.ops.func_pow.rst index 3ebbcad8c3a..c0533a45ca3 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_pow.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_pow.rst @@ -12,8 +12,8 @@ mindspore.ops.pow out_{i} = input_{i} ^{ exponent_{i}} 参数: - - **input** (Union[Tensor, Number]) - 第一个输入,是一个Number值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - - **exponent** (Union[Tensor, Number]) - 第二个输入,是一个Number值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **input** (Union[Tensor, Number]) - 第一个输入,是一个Number值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **exponent** (Union[Tensor, Number]) - 第二个输入,是一个Number值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 返回: Tensor,shape与广播后的shape相同,数据类型为两个输入中精度较高的类型。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_size.rst b/docs/api/api_python/ops/mindspore.ops.func_size.rst index 8e6805bf3c5..b89e22207da 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_size.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_size.rst @@ -6,7 +6,7 @@ mindspore.ops.size 返回一个Scalar,类型为整数,表示输入Tensor的大小,即Tensor中元素的总数。 参数: - - **input_x** (Tensor) - 输入参数,shape为 :math:`(x_1, x_2, ..., x_R)` 。数据类型为 `number `_ 。 + - **input_x** (Tensor) - 输入参数,shape为 :math:`(x_1, x_2, ..., x_R)` 。数据类型为 `number `_ 。 返回: 整数,表示 `input_x` 元素大小的Scalar。它的值为 :math:`size=x_1*x_2*...x_R` 。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_stop_gradient.rst b/docs/api/api_python/ops/mindspore.ops.func_stop_gradient.rst index 627f4dc36ea..d730b35ba24 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_stop_gradient.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_stop_gradient.rst @@ -3,7 +3,7 @@ mindspore.ops.stop_gradient .. 
py:function:: mindspore.ops.stop_gradient(value) - 用于消除某个值对梯度的影响,例如截断来自于函数输出的梯度传播。更多细节请参考 `Stop Gradient `_ 。 + 用于消除某个值对梯度的影响,例如截断来自于函数输出的梯度传播。更多细节请参考 `Stop Gradient `_ 。 参数: - **value** (Any) - 需要被消除梯度影响的值。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_sub.rst b/docs/api/api_python/ops/mindspore.ops.func_sub.rst index 1362121e364..75299ed8dcf 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_sub.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_sub.rst @@ -15,7 +15,7 @@ mindspore.ops.sub - 两个输入遵循隐式类型转换规则,使数据类型保持一致。 参数: - - **input** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 + - **input** (Union[Tensor, number.Number, bool]) - 第一个输入,是一个number.Number、bool值或数据类型为 `number `_ 或 `bool_ `_ 的Tensor。 - **other** (Union[Tensor, number.Number, bool]) - 第二个输入,当第一个输入是Tensor时,第二个输入应该是一个number.Number或bool值,或数据类型为number或bool的Tensor。 返回: diff --git a/docs/api/api_python/ops/mindspore.ops.func_xlogy.rst b/docs/api/api_python/ops/mindspore.ops.func_xlogy.rst index 3eb03227d0f..f221dfb4107 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_xlogy.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_xlogy.rst @@ -14,7 +14,7 @@ mindspore.ops.xlogy - 在Ascend上, `input` 和 `other` 必须为float16或float32。 参数: - - **input** (Union[Tensor, number.Number, bool]) - 第一个输入为数值型。数据类型为 `number `_ 或 `bool_ `_ 。 + - **input** (Union[Tensor, number.Number, bool]) - 第一个输入为数值型。数据类型为 `number `_ 或 `bool_ `_ 。 - **other** (Union[Tensor, number.Number, bool]) - 第二个输入为数值型。当第一个输入是Tensor或数据类型为数值型或bool的Tensor时,则第二个输入是数值型或bool。当第一个输入是Scalar时,则第二个输入必须是数据类型为数值型或bool的Tensor。 返回: diff --git a/docs/api/api_python/samples/dataset/audio_gallery.ipynb b/docs/api/api_python/samples/dataset/audio_gallery.ipynb index 917eaa1819a..34649928ac4 100644 --- a/docs/api/api_python/samples/dataset/audio_gallery.ipynb +++ b/docs/api/api_python/samples/dataset/audio_gallery.ipynb @@ -7,10 +7,10 @@ "source": [ "# 音频变换样例库\n", "\n", - 
"[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.3.q1/docs/api_python/samples/dataset/audio_gallery.ipynb)&nbsp;\n", - "[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/mindspore/blob/r2.3.q1/docs/api/api_python/samples/dataset/audio_gallery.ipynb)\n", + "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/docs/api_python/samples/dataset/audio_gallery.ipynb)&nbsp;\n", + "[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python/samples/dataset/audio_gallery.ipynb)\n", "\n", - "此指南展示了[mindpore.dataset.audio](https://www.mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.dataset.transforms.html#%E9%9F%B3%E9%A2%91)模块中各种变换的用法。" + "此指南展示了[mindspore.dataset.audio](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.transforms.html#%E9%9F%B3%E9%A2%91)模块中各种变换的用法。" ] }, { @@ -96,7 +96,7 @@ "source": [ "## Spectrogram\n", "\n", - "从音频信号创建其频谱,可以使用[mindspore.dataset.audio.Spectrogram](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.Spectrogram.html#mindspore.dataset.audio.Spectrogram)。" + "从音频信号创建其频谱,可以使用[mindspore.dataset.audio.Spectrogram](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_audio/mindspore.dataset.audio.Spectrogram.html#mindspore.dataset.audio.Spectrogram)。" ] }, { @@ -189,7 +189,7 @@ "source": [ "## GriffinLim\n", "\n", - "从线性幅度频谱图恢复信号波形, 可以使用
[mindspore.dataset.audio.GriffinLim](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.GriffinLim.html#mindspore.dataset.audio.GriffinLim) 。" + "从线性幅度频谱图恢复信号波形, 可以使用 [mindspore.dataset.audio.GriffinLim](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_audio/mindspore.dataset.audio.GriffinLim.html#mindspore.dataset.audio.GriffinLim) 。" ] }, { @@ -260,7 +260,7 @@ "source": [ "## Mel Filter Bank\n", "\n", - "[mindspore.dataset.audio.melscale_fbanks](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.melscale_fbanks.html#mindspore.dataset.audio.melscale_fbanks) 可以创建频率变换矩阵。" + "[mindspore.dataset.audio.melscale_fbanks](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_audio/mindspore.dataset.audio.melscale_fbanks.html#mindspore.dataset.audio.melscale_fbanks) 可以创建频率变换矩阵。" ] }, { @@ -307,7 +307,7 @@ "source": [ "## MelSpectrogram\n", "\n", - "[mindspore.dataset.audio.MelSpectrogram](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.MelSpectrogram.html#mindspore.dataset.audio.MelSpectrogram) 可以计算原始音频信号的梅尔频谱。" + "[mindspore.dataset.audio.MelSpectrogram](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_audio/mindspore.dataset.audio.MelSpectrogram.html#mindspore.dataset.audio.MelSpectrogram) 可以计算原始音频信号的梅尔频谱。" ] }, { @@ -361,7 +361,7 @@ "source": [ "## MFCC\n", "\n", - "[mindspore.dataset.audio.MFCC](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.MFCC.html#mindspore.dataset.audio.MFCC) 可以计算音频信号的梅尔频率倒谱系数。" + "[mindspore.dataset.audio.MFCC](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_audio/mindspore.dataset.audio.MFCC.html#mindspore.dataset.audio.MFCC) 可以计算音频信号的梅尔频率倒谱系数。" ] }, { @@ -424,7 +424,7 @@ "source": [ "## LFCC\n", "\n", - 
"[mindspore.dataset.audio.LFCC](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.LFCC.html#mindspore.dataset.audio.LFCC) 可以计算音频信号的线性频率倒谱系数。\n" + "[mindspore.dataset.audio.LFCC](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_audio/mindspore.dataset.audio.LFCC.html#mindspore.dataset.audio.LFCC) 可以计算音频信号的线性频率倒谱系数。\n" ] }, { @@ -481,7 +481,7 @@ "source": [ "## 在数据Pipeline中加载和处理图像文件\n", "\n", - "使用 [mindspore.dataset.GeneratorDataset](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset) 将磁盘中的音频文件内容加载到数据Pipeline中,并进一步应用其他增强操作。" + "使用 [mindspore.dataset.GeneratorDataset](https://mindspore.cn/docs/zh-CN/master/api_python/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset) 将磁盘中的音频文件内容加载到数据Pipeline中,并进一步应用其他增强操作。" ] }, { diff --git a/docs/api/api_python/samples/dataset/dataset_gallery.ipynb b/docs/api/api_python/samples/dataset/dataset_gallery.ipynb index f552af08bed..e82bf6a0f99 100644 --- a/docs/api/api_python/samples/dataset/dataset_gallery.ipynb +++ b/docs/api/api_python/samples/dataset/dataset_gallery.ipynb @@ -7,10 +7,10 @@ "source": [ "# 使用数据Pipeline加载 & 处理数据集\n", "\n", - "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.3.q1/docs/api_python/samples/dataset/dataset_gallery.ipynb) \n", - "[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/mindspore/blob/r2.3.q1/docs/api/api_python/samples/dataset/dataset_gallery.ipynb)\n", + 
"[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/docs/api_python/samples/dataset/dataset_gallery.ipynb) \n", + "[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python/samples/dataset/dataset_gallery.ipynb)\n", "\n", - "此指南展示了[mindspore.dataset](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.dataset.html)模块中的各种用法。" + "此指南展示了[mindspore.dataset](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.html)模块中的各种用法。" ] }, { @@ -85,7 +85,7 @@ "source": [ "## 加载开源数据集\n", "\n", - "使用 [mindspore.dataset.MnistDataset](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset/mindspore.dataset.MnistDataset.html#mindspore.dataset.MnistDataset) 和 [mindspore.dataset.Cifar10Dataset](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset/mindspore.dataset.Cifar10Dataset.html#mindspore.dataset.Cifar10Dataset) 加载MNIST/Cifar10数据集。\n", + "使用 [mindspore.dataset.MnistDataset](https://mindspore.cn/docs/zh-CN/master/api_python/dataset/mindspore.dataset.MnistDataset.html#mindspore.dataset.MnistDataset) 和 [mindspore.dataset.Cifar10Dataset](https://mindspore.cn/docs/zh-CN/master/api_python/dataset/mindspore.dataset.Cifar10Dataset.html#mindspore.dataset.Cifar10Dataset) 加载MNIST/Cifar10数据集。\n", "\n", "示例展示了如何加载数据集文件并显示数据集的内容。\n", "\n", @@ -207,7 +207,7 @@ "source": [ "### 加载文件目录结构的数据集\n", "\n", - "对于ImageNet数据集或其他具有类似结构的数据集,建议使用 [mindspore.dataset.ImageFolderDataset](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset/mindspore.dataset.ImageFolderDataset.html#mindspore.dataset.ImageFolderDataset) 将数据集文件加载到数据Pipeline中。\n", + "对于ImageNet数据集或其他具有类似结构的数据集,建议使用 
[mindspore.dataset.ImageFolderDataset](https://mindspore.cn/docs/zh-CN/master/api_python/dataset/mindspore.dataset.ImageFolderDataset.html#mindspore.dataset.ImageFolderDataset) 将数据集文件加载到数据Pipeline中。\n", "\n", "```text\n", "Structure of ImageNet dataset:\n", @@ -329,7 +329,7 @@ "\n", "`mindspore.dataset`模块提供了一些常用的公开数据集和标准格式数据集的加载API。\n", "\n", - "对于MindSpore暂不支持直接加载的数据集,[mindspore.dataset.GeneratorDataset](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset) 提供了一种自定义的方式加载和处理数据。\n", + "对于MindSpore暂不支持直接加载的数据集,[mindspore.dataset.GeneratorDataset](https://mindspore.cn/docs/zh-CN/master/api_python/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset) 提供了一种自定义的方式加载和处理数据。\n", "\n", "`GeneratorDataset`支持通过可随机访问数据集对象、可迭代数据集对象和生成器(generator)构造自定义数据集。\n", "\n", diff --git a/docs/api/api_python/samples/dataset/text_gallery.ipynb b/docs/api/api_python/samples/dataset/text_gallery.ipynb index 5a624e1d2d7..8130ab6f5f6 100644 --- a/docs/api/api_python/samples/dataset/text_gallery.ipynb +++ b/docs/api/api_python/samples/dataset/text_gallery.ipynb @@ -7,10 +7,10 @@ "source": [ "# 文本变换样例库\n", "\n", - "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.3.q1/docs/api_python/samples/dataset/text_gallery.ipynb) \n", - "[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/mindspore/blob/r2.3.q1/docs/api/api_python/samples/dataset/text_gallery.ipynb)\n", + "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/docs/api_python/samples/dataset/text_gallery.ipynb) \n", + 
"[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python/samples/dataset/text_gallery.ipynb)\n", "\n", - "此指南展示了[mindpore.dataset.text](https://www.mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.dataset.transforms.html#%E6%96%87%E6%9C%AC)模块中各种变换的用法。" + "此指南展示了[mindspore.dataset.text](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.transforms.html#%E6%96%87%E6%9C%AC)模块中各种变换的用法。" ] }, { @@ -73,7 +73,7 @@ "source": [ "## Vocab\n", "\n", - "[mindspore.dataset.text.Vocab](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_text/mindspore.dataset.text.Vocab.html#mindspore.dataset.text.Vocab) 用于存储多对字符与ID。其包含一个映射,可以将每个单词(str)映射到一个ID(int)。" + "[mindspore.dataset.text.Vocab](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_text/mindspore.dataset.text.Vocab.html#mindspore.dataset.text.Vocab) 用于存储多对字符与ID。其包含一个映射,可以将每个单词(str)映射到一个ID(int)。" ] }, { @@ -118,7 +118,7 @@ "source": [ "## AddToken\n", "\n", - "[mindspore.dataset.text.AddToken](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_text/mindspore.dataset.text.AddToken.html#mindspore.dataset.text.AddToken) 将分词(token)添加到序列的开头或结尾处。" + "[mindspore.dataset.text.AddToken](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_text/mindspore.dataset.text.AddToken.html#mindspore.dataset.text.AddToken) 将分词(token)添加到序列的开头或结尾处。" ] }, { @@ -151,7 +151,7 @@ "source": [ "## SentencePieceTokenizer\n", "\n", - "[mindspore.dataset.text.SentencePieceTokenizer](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_text/mindspore.dataset.text.SentencePieceTokenizer.html#mindspore.dataset.text.SentencePieceTokenizer) 使用SentencePiece分词器对字符串进行分词。\n" +
"[mindspore.dataset.text.SentencePieceTokenizer](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_text/mindspore.dataset.text.SentencePieceTokenizer.html#mindspore.dataset.text.SentencePieceTokenizer) 使用SentencePiece分词器对字符串进行分词。\n" ] }, { @@ -192,7 +192,7 @@ "source": [ "## WordpieceTokenizer\n", "\n", - "[mindspore.dataset.text.WordpieceTokenizer](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_text/mindspore.dataset.text.WordpieceTokenizer.html#mindspore.dataset.text.WordpieceTokenizer) 将输入的字符串切分为子词。" + "[mindspore.dataset.text.WordpieceTokenizer](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_text/mindspore.dataset.text.WordpieceTokenizer.html#mindspore.dataset.text.WordpieceTokenizer) 将输入的字符串切分为子词。" ] }, { @@ -224,7 +224,7 @@ "source": [ "## 在数据Pipeline中加载和处理TXT文件\n", "\n", - "使用 [mindspore.dataset.TextFileDataset](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset/mindspore.dataset.TextFileDataset.html#mindspore.dataset.TextFileDataset) 将磁盘中的文本文件内容加载到数据Pipeline中,并应用分词器对其中的内容进行分词。" + "使用 [mindspore.dataset.TextFileDataset](https://mindspore.cn/docs/zh-CN/master/api_python/dataset/mindspore.dataset.TextFileDataset.html#mindspore.dataset.TextFileDataset) 将磁盘中的文本文件内容加载到数据Pipeline中,并应用分词器对其中的内容进行分词。" ] }, { diff --git a/docs/api/api_python/samples/dataset/vision_gallery.ipynb b/docs/api/api_python/samples/dataset/vision_gallery.ipynb index b1a0497d871..378d6086b39 100644 --- a/docs/api/api_python/samples/dataset/vision_gallery.ipynb +++ b/docs/api/api_python/samples/dataset/vision_gallery.ipynb @@ -7,10 +7,10 @@ "source": [ "# 视觉变换样例库\n", "\n", - "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.3.q1/docs/api_python/samples/dataset/vision_gallery.ipynb) \n", - 
"[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/mindspore/blob/r2.3.q1/docs/api/api_python/samples/dataset/vision_gallery.ipynb)\n", + "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/docs/api_python/samples/dataset/vision_gallery.ipynb)&nbsp;\n", + "[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python/samples/dataset/vision_gallery.ipynb)\n", "\n", - "此指南展示了[mindpore.dataset.vision](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.dataset.transforms.html#%E8%A7%86%E8%A7%89)模块中各种变换的用法。" + "此指南展示了[mindspore.dataset.vision](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.transforms.html#%E8%A7%86%E8%A7%89)模块中各种变换的用法。" ] }, { @@ -84,7 +84,7 @@ "\n", "### Pad\n", "\n", - "[mindspore.dataset.vision.Pad](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.Pad.html#mindspore.dataset.vision.Pad) 会对图像的边缘填充像素。" + "[mindspore.dataset.vision.Pad](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.Pad.html#mindspore.dataset.vision.Pad) 会对图像的边缘填充像素。" ] }, { @@ -117,7 +117,7 @@ "source": [ "### Resize\n", "\n", - "[mindspore.dataset.vision.Resize](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.Resize.html#mindspore.dataset.vision.Resize) 会调整图像的尺寸大小。" + "[mindspore.dataset.vision.Resize](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.Resize.html#mindspore.dataset.vision.Resize) 会调整图像的尺寸大小。" ] }, { @@ -150,7 +150,7 @@ "source": [ "### CenterCrop\n", "\n", -
"[mindspore.dataset.vision.CenterCrop](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.CenterCrop.html#mindspore.dataset.vision.CenterCrop) 会在图像中裁剪出中心区域。" + "[mindspore.dataset.vision.CenterCrop](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.CenterCrop.html#mindspore.dataset.vision.CenterCrop) 会在图像中裁剪出中心区域。" ] }, { @@ -183,7 +183,7 @@ "source": [ "### FiveCrop\n", "\n", - "[mindspore.dataset.vision.FiveCrop](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.FiveCrop.html#mindspore.dataset.vision.FiveCrop) 在图像的中心与四个角处分别裁剪指定尺寸大小的子图。" + "[mindspore.dataset.vision.FiveCrop](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.FiveCrop.html#mindspore.dataset.vision.FiveCrop) 在图像的中心与四个角处分别裁剪指定尺寸大小的子图。" ] }, { @@ -216,7 +216,7 @@ "source": [ "### RandomPerspective\n", "\n", - "[mindspore.dataset.vision.RandomPerspective](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomPerspective.html#mindspore.dataset.vision.RandomPerspective) 会按照指定的概率对输入图像进行透视变换。" + "[mindspore.dataset.vision.RandomPerspective](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomPerspective.html#mindspore.dataset.vision.RandomPerspective) 会按照指定的概率对输入图像进行透视变换。" ] }, { @@ -248,7 +248,7 @@ "source": [ "### RandomRotation\n", "\n", - "[mindspore.dataset.vision.RandomRotation](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomRotation.html#mindspore.dataset.vision.RandomRotation) 会随机旋转输入图像。" + "[mindspore.dataset.vision.RandomRotation](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomRotation.html#mindspore.dataset.vision.RandomRotation) 会随机旋转输入图像。" ] }, { @@ -280,7 +280,7 @@ "source": [ "### RandomAffine\n", "\n", - 
"[mindspore.dataset.vision.RandomAffine](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomAffine.html#mindspore.dataset.vision.RandomAffine) 会对输入图像应用随机仿射变换。" + "[mindspore.dataset.vision.RandomAffine](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomAffine.html#mindspore.dataset.vision.RandomAffine) 会对输入图像应用随机仿射变换。" ] }, { @@ -312,7 +312,7 @@ "source": [ "### RandomCrop\n", "\n", - "[mindspore.dataset.vision.RandomCrop](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomCrop.html#mindspore.dataset.vision.RandomCrop) 会对输入图像进行随机区域的裁剪。\n", + "[mindspore.dataset.vision.RandomCrop](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomCrop.html#mindspore.dataset.vision.RandomCrop) 会对输入图像进行随机区域的裁剪。\n", "\n" ] }, @@ -345,7 +345,7 @@ "source": [ "### RandomResizedCrop\n", "\n", - "[mindspore.dataset.vision.RandomResizedCrop](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCrop.html#mindspore.dataset.vision.RandomResizedCrop) 会对输入图像进行随机裁剪,并将裁剪区域调整为指定的尺寸大小。" + "[mindspore.dataset.vision.RandomResizedCrop](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCrop.html#mindspore.dataset.vision.RandomResizedCrop) 会对输入图像进行随机裁剪,并将裁剪区域调整为指定的尺寸大小。" ] }, { @@ -387,7 +387,7 @@ "source": [ "### Grayscale\n", "\n", - "[mindspore.dataset.vision.Grayscale](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.Grayscale.html#mindspore.dataset.vision.Grayscale) 会将图像转换为灰度图。" + "[mindspore.dataset.vision.Grayscale](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.Grayscale.html#mindspore.dataset.vision.Grayscale) 会将图像转换为灰度图。" ] }, { @@ -420,7 +420,7 @@ "source": [ "### RandomColorAdjust\n", "\n", - 
"[mindspore.dataset.vision.RandomColorAdjust](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomColorAdjust.html#mindspore.dataset.vision.RandomColorAdjust) 会随机调整输入图像的亮度、对比度、饱和度和色调。" + "[mindspore.dataset.vision.RandomColorAdjust](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomColorAdjust.html#mindspore.dataset.vision.RandomColorAdjust) 会随机调整输入图像的亮度、对比度、饱和度和色调。" ] }, { @@ -454,7 +454,7 @@ "source": [ "### GaussianBlur\n", "\n", - "[mindspore.dataset.vision.GaussianBlur](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.GaussianBlur.html#mindspore.dataset.vision.GaussianBlur) 会使用指定的高斯核对输入图像进行模糊处理。" + "[mindspore.dataset.vision.GaussianBlur](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.GaussianBlur.html#mindspore.dataset.vision.GaussianBlur) 会使用指定的高斯核对输入图像进行模糊处理。" ] }, { @@ -488,7 +488,7 @@ "source": [ "### RandomInvert\n", "\n", - "[mindspore.dataset.vision.RandomInvert](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomInvert.html#mindspore.dataset.vision.RandomInvert) 会以给定的概率随机反转图像的颜色。\n", + "[mindspore.dataset.vision.RandomInvert](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomInvert.html#mindspore.dataset.vision.RandomInvert) 会以给定的概率随机反转图像的颜色。\n", "\n" ] }, @@ -523,7 +523,7 @@ "source": [ "### RandomPosterize\n", "\n", - "[mindspore.dataset.vision.RandomPosterize](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomPosterize.html#mindspore.dataset.vision.RandomPosterize) 会随机减少图像的颜色通道的比特位数,使图像变得高对比度和颜色鲜艳。\n", + "[mindspore.dataset.vision.RandomPosterize](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomPosterize.html#mindspore.dataset.vision.RandomPosterize) 会随机减少图像的颜色通道的比特位数,使图像变得高对比度和颜色鲜艳。\n", "\n" 
] }, @@ -558,7 +558,7 @@ "source": [ "### RandomSolarize\n", "\n", - "[mindspore.dataset.vision.RandomSolarize](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomSolarize.html#mindspore.dataset.vision.RandomSolarize) 会随机翻转给定范围内的像素。\n", + "[mindspore.dataset.vision.RandomSolarize](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomSolarize.html#mindspore.dataset.vision.RandomSolarize) 会随机翻转给定范围内的像素。\n", "\n" ] }, @@ -593,7 +593,7 @@ "source": [ "### RandomAdjustSharpness\n", "\n", - "[mindspore.dataset.vision.RandomAdjustSharpness](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomAdjustSharpness.html#mindspore.dataset.vision.RandomAdjustSharpness) 会以给定的概率随机调整输入图像的锐度。\n", + "[mindspore.dataset.vision.RandomAdjustSharpness](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomAdjustSharpness.html#mindspore.dataset.vision.RandomAdjustSharpness) 会以给定的概率随机调整输入图像的锐度。\n", "\n" ] }, @@ -628,7 +628,7 @@ "source": [ "### RandomAutoContrast\n", "\n", - "[mindspore.dataset.vision.RandomAutoContrast](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomAutoContrast.html#mindspore.dataset.vision.RandomAutoContrast) 会以给定的概率自动调整图像的对比度。" + "[mindspore.dataset.vision.RandomAutoContrast](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomAutoContrast.html#mindspore.dataset.vision.RandomAutoContrast) 会以给定的概率自动调整图像的对比度。" ] }, { @@ -662,7 +662,7 @@ "source": [ "### RandomEqualize\n", "\n", - "[mindspore.dataset.vision.RandomEqualize](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomEqualize.html#mindspore.dataset.vision.RandomEqualize) 会以给定的概率随机对输入图像进行直方图均衡化。\n", + 
"[mindspore.dataset.vision.RandomEqualize](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomEqualize.html#mindspore.dataset.vision.RandomEqualize) 会以给定的概率随机对输入图像进行直方图均衡化。\n", "\n" ] }, @@ -707,7 +707,7 @@ "source": [ "### AutoAugment\n", "\n", - "[mindspore.dataset.vision.AutoAugment](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.AutoAugment.html#mindspore.dataset.vision.AutoAugment) 会应用AutoAugment数据增强方法,增强的实现基于基于论文[AutoAugment: Learning Augmentation Strategies from Data](https://arxiv.org/pdf/1805.09501.pdf)。" + "[mindspore.dataset.vision.AutoAugment](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.AutoAugment.html#mindspore.dataset.vision.AutoAugment) 会应用AutoAugment数据增强方法,增强的实现基于论文[AutoAugment: Learning Augmentation Strategies from Data](https://arxiv.org/pdf/1805.09501.pdf)。" ] }, { @@ -739,7 +739,7 @@ "source": [ "### RandAugment\n", "\n", - "[mindspore.dataset.vision.RandAugment](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandAugment.html#mindspore.dataset.vision.RandAugment) 会对输入图像应用RandAugment数据增强方法,增强的实现基于基于论文[RandAugment: Learning Augmentation Strategies from Data](https://arxiv.org/pdf/1909.13719.pdf)。\n", + "[mindspore.dataset.vision.RandAugment](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandAugment.html#mindspore.dataset.vision.RandAugment) 会对输入图像应用RandAugment数据增强方法,增强的实现基于论文[RandAugment: Learning Augmentation Strategies from Data](https://arxiv.org/pdf/1909.13719.pdf)。\n", "\n" ] }, @@ -774,7 +774,7 @@ "source": [ "### TrivialAugmentWide\n", "\n", - "[mindspore.dataset.vision.TrivialAugmentWide](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.TrivialAugmentWide.html#mindspore.dataset.vision.TrivialAugmentWide)会对输入图像应用TrivialAugmentWide数据增强方法,增强的实现基于基于论文[TrivialAugmentWide:
Tuning-free Yet State-of-the-Art Data Augmentation](https://arxiv.org/abs/2103.10158)。\n" + "[mindspore.dataset.vision.TrivialAugmentWide](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.TrivialAugmentWide.html#mindspore.dataset.vision.TrivialAugmentWide)会对输入图像应用TrivialAugmentWide数据增强方法,增强的实现基于论文[TrivialAugmentWide: Tuning-free Yet State-of-the-Art Data Augmentation](https://arxiv.org/abs/2103.10158)。\n" ] }, { @@ -812,7 +812,7 @@ "\n", "### RandomHorizontalFlip\n", "\n", - "[mindspore.dataset.vision.RandomHorizontalFlip](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlip.html#mindspore.dataset.vision.RandomHorizontalFlip)会对输入图像进行水平随机翻转。" + "[mindspore.dataset.vision.RandomHorizontalFlip](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlip.html#mindspore.dataset.vision.RandomHorizontalFlip)会对输入图像进行水平随机翻转。" ] }, { @@ -846,7 +846,7 @@ "source": [ "### RandomVerticalFlip\n", "\n", - "[mindspore.dataset.vision.RandomVerticalFlip](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlip.html#mindspore.dataset.vision.RandomVerticalFlip) 会对输入图像进行垂直随机翻转。" + "[mindspore.dataset.vision.RandomVerticalFlip](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlip.html#mindspore.dataset.vision.RandomVerticalFlip) 会对输入图像进行垂直随机翻转。" ] }, { @@ -880,7 +880,7 @@ "source": [ "### RandomApply\n", "\n", - "[mindspore.dataset.transforms.RandomApply](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset_transforms/mindspore.dataset.transforms.RandomApply.html#mindspore.dataset.transforms.RandomApply) 可以指定一组数据增强处理及其被应用的概率,在运算时按概率随机应用其中的增强处理。" +
"[mindspore.dataset.transforms.RandomApply](https://mindspore.cn/docs/zh-CN/master/api_python/dataset_transforms/mindspore.dataset.transforms.RandomApply.html#mindspore.dataset.transforms.RandomApply) 可以指定一组数据增强处理及其被应用的概率,在运算时按概率随机应用其中的增强处理。" ] }, { @@ -916,7 +916,7 @@ "source": [ "## 在数据Pipeline中加载和处理图像文件\n", "\n", - "使用 [mindspore.dataset.ImageFolderDataset](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/dataset/mindspore.dataset.ImageFolderDataset.html#mindspore.dataset.ImageFolderDataset) 将磁盘中的图像文件内容加载到数据Pipeline中,并进一步应用其他增强操作。" + "使用 [mindspore.dataset.ImageFolderDataset](https://mindspore.cn/docs/zh-CN/master/api_python/dataset/mindspore.dataset.ImageFolderDataset.html#mindspore.dataset.ImageFolderDataset) 将磁盘中的图像文件内容加载到数据Pipeline中,并进一步应用其他增强操作。" ] }, { diff --git a/docs/api/api_python/samples/ops/communicate_ops.md b/docs/api/api_python/samples/ops/communicate_ops.md index 10b60706f57..597f02f2e41 100644 --- a/docs/api/api_python/samples/ops/communicate_ops.md +++ b/docs/api/api_python/samples/ops/communicate_ops.md @@ -1,10 +1,10 @@ # 分布式集合通信原语 - + 在分布式训练中涉及例如`AllReduce`、`ReduceScatter`、`AllGather`和`Broadcast`等通信操作进行数据传输,我们将在下述的章节分别阐述其含义和示例代码。 -下述每个章节中给出了使用4张GPU进行不同通信操作的示例。示例中的输出来自于0号卡`rank0`程序的结果。用户需要将下述每个章节代码另存为communication.py。因为涉及到多卡程序,用户需要通过`mpirun`命令去启动communication.py。其中`mpirun`命令需要安装OpenMPI以及NCCL,对应的安装请参考[mpirun启动](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.q1/parallel/mpirun.html)。准备好communication.py后,在命令行中输入如下启动命令,即可启动多卡程序: +下述每个章节中给出了使用4张GPU进行不同通信操作的示例。示例中的输出来自于0号卡`rank0`程序的结果。用户需要将下述每个章节代码另存为communication.py。因为涉及到多卡程序,用户需要通过`mpirun`命令去启动communication.py。其中`mpirun`命令需要安装OpenMPI以及NCCL,对应的安装请参考[mpirun启动](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/mpirun.html)。准备好communication.py后,在命令行中输入如下启动命令,即可启动多卡程序: ```bash mpirun -output-filename log -merge-stderr-to-stdout -np 4 python communication.py diff --git a/docs/api/api_python/samples/rewrite/rewrite_tutorial.md 
b/docs/api/api_python/samples/rewrite/rewrite_tutorial.md index 8ab5f9e9e50..6008de29074 100644 --- a/docs/api/api_python/samples/rewrite/rewrite_tutorial.md +++ b/docs/api/api_python/samples/rewrite/rewrite_tutorial.md @@ -1,8 +1,8 @@ # 使用ReWrite修改网络 -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/mindspore/blob/r2.3.q1/docs/api/api_python/samples/rewrite/rewrite_tutorial.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python/samples/rewrite/rewrite_tutorial.md) -此指南展示了[mindspore.rewrite](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html)模块中API的各种用法。 +此指南展示了[mindspore.rewrite](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html)模块中API的各种用法。 ## 功能介绍 @@ -14,9 +14,9 @@ ReWrite模块提供了一组新的接口,用户可以使用这组接口为一 ## 创建SymbolTree 当用户需要使用ReWrite模块对一个网络进行修改时,首先需要基于该网络的实例创建一个SymbolTree,使用的接口 -是 [mindspore.rewrite.SymbolTree.create](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.create) 。 +是 [mindspore.rewrite.SymbolTree.create](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.create) 。 -通过接口 [mindspore.rewrite.SymbolTree.get_code](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_code) 可以查看当前SymbolTree里存储的网络代码。 +通过接口 [mindspore.rewrite.SymbolTree.get_code](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_code) 可以查看当前SymbolTree里存储的网络代码。 ``` python import mindspore.nn as nn @@ -66,7 +66,7 @@ class MyNetOpt(nn.Cell): 新的网络还将当前工作目录保存到 ``sys.path`` 里,从而保证新网络运行时可以搜索到原网络依赖的模块。 -通过接口 
[mindspore.rewrite.SymbolTree.print_node_tabulate](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.print_node_tabulate) 可以看到SymbolTree里存储的节点信息及节点拓扑关系。 +通过接口 [mindspore.rewrite.SymbolTree.print_node_tabulate](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.print_node_tabulate) 可以看到SymbolTree里存储的节点信息及节点拓扑关系。 该接口依赖tabulate模块,安装指令为: ``pip install tabulate`` 。 ``` python @@ -126,8 +126,8 @@ NodeType.Output return return x [[0, ('relu', 0)]] ## 插入节点 -当需要在网络的前向计算过程中插入一行新的代码时,可以先使用接口 [mindspore.rewrite.Node.create_call_cell](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.create_call_cell) 创建一个新 -的节点,然后使用接口 [mindspore.rewrite.SymbolTree.insert](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.insert) 将创建的节点插入到SymbolTree内。 +当需要在网络的前向计算过程中插入一行新的代码时,可以先使用接口 [mindspore.rewrite.Node.create_call_cell](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.create_call_cell) 创建一个新 +的节点,然后使用接口 [mindspore.rewrite.SymbolTree.insert](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.insert) 将创建的节点插入到SymbolTree内。 ``` python from mindspore.rewrite import SymbolTree, Node, ScopedValue @@ -144,8 +144,8 @@ stree.print_node_tabulate() 在该样例中,插入节点的流程如下: 1. 首先创建了一个新的节点,使用的Cell是 ``nn.ReLU()`` ,输入输出均为 ``"x"`` ,节点名是 ``"new_relu"`` 。 -2. 接着通过 [mindspore.rewrite.SymbolTree.get_node](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_node) 方法获取dense节点。 -3. 最后通过 [mindspore.rewrite.SymbolTree.insert](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.insert) 方法将新创建的节点插入到dense节点后面。 +2. 
接着通过 [mindspore.rewrite.SymbolTree.get_node](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_node) 方法获取dense节点。 +3. 最后通过 [mindspore.rewrite.SymbolTree.insert](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.insert) 方法将新创建的节点插入到dense节点后面。 运行结果如下: @@ -163,7 +163,7 @@ NodeType.Output return return x [[0, ('relu', 0)]] [] 可以看到,新的new_relu节点插入到dense节点和relu节点间,节点的拓扑结构随着节点插入自动更新。 其中,新节点对应代码里的 `self.new_relu` 定义在新网络的init函数里,使用传入的 `new_relu_cell` 作为实例。 -除了使用 [mindspore.rewrite.SymbolTree.get_node](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_node) 方法获取节点来指定插入位置,还可以通过 [mindspore.rewrite.SymbolTree.nodes](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.nodes) 来遍历节点,并使用 [mindspore.rewrite.Node.get_instance_type](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.get_instance_type) 基于节点对应实例的类型来获取节点,确定插入位置。 +除了使用 [mindspore.rewrite.SymbolTree.get_node](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_node) 方法获取节点来指定插入位置,还可以通过 [mindspore.rewrite.SymbolTree.nodes](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.nodes) 来遍历节点,并使用 [mindspore.rewrite.Node.get_instance_type](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.get_instance_type) 基于节点对应实例的类型来获取节点,确定插入位置。 ``` python for node in stree.nodes(): @@ -171,10 +171,10 @@ for node in stree.nodes(): stree.insert(stree.after(node), new_node) ``` -如果希望插入新代码的输出不复用原始网络里的变量,可以在创建节点时使用 [mindspore.rewrite.SymbolTree.unique_name](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.unique_name) 得 +如果希望插入新代码的输出不复用原始网络里的变量,可以在创建节点时使用 
[mindspore.rewrite.SymbolTree.unique_name](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.unique_name) 得 到一个SymbolTree内不重名的变量名,作为节点的输出。 -然后在插入节点前,通过使用 [mindspore.rewrite.Node.set_arg](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.set_arg) 修改节点输入变量名,设置哪些节点使用新的节点输出作为输入。 +然后在插入节点前,通过使用 [mindspore.rewrite.Node.set_arg](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.set_arg) 修改节点输入变量名,设置哪些节点使用新的节点输出作为输入。 ``` python from mindspore.rewrite import SymbolTree, Node, ScopedValue @@ -209,7 +209,7 @@ NodeType.Output return return x [[0, ('relu', 0)]] [] ## 删除节点 -当需要在网络的前向计算过程中删除一行代码时,可以使用接口 [mindspore.rewrite.SymbolTree.erase](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.erase) 来删除节点。 +当需要在网络的前向计算过程中删除一行代码时,可以使用接口 [mindspore.rewrite.SymbolTree.erase](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.erase) 来删除节点。 节点删除后,符号树内剩余节点的拓扑关系会依据删除后的代码情况自动更新。 因此,当待删除的节点的输出被别的节点使用时,节点删除后,需要注意剩余节点的拓扑关系是否符合设计预期。 @@ -266,7 +266,7 @@ stree.erase(relu_node) stree.print_node_tabulate() ``` -在该样例中,拿到relu节点后,先使用接口 [mindspore.rewrite.Node.get_users](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.get_users) 遍历使用relu节点的输出作为输入的节点,将这些 +在该样例中,拿到relu节点后,先使用接口 [mindspore.rewrite.Node.get_users](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.get_users) 遍历使用relu节点的输出作为输入的节点,将这些 节点的输入都改为relu节点的输入,然后再删除relu节点。这样的话,后续使用了relu节点输出 ``z`` 的地方就都改为使用relu节点输入 ``y`` 了。 具体的参数名修改策略取决于实际场景需求。 @@ -286,7 +286,7 @@ NodeType.Output return return y [[0, ('dense', 0)]] [] ## 替换节点 -当需要在网络的前向计算过程中替换代码时,可以使用接口 [mindspore.rewrite.SymbolTree.replace](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.replace) 来替换节点。 
+当需要在网络的前向计算过程中替换代码时,可以使用接口 [mindspore.rewrite.SymbolTree.replace](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.replace) 来替换节点。 ``` python from mindspore.rewrite import SymbolTree, Node, ScopedValue @@ -352,7 +352,7 @@ NodeType.Output return return y1 [[0, ('new_relu_1', 0)]] ## 返回新网络 -当对网络修改完毕后,就可以使用接口 [mindspore.rewrite.SymbolTree.get_network](https://mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_network) 得到修改后的网络实例了。 +当对网络修改完毕后,就可以使用接口 [mindspore.rewrite.SymbolTree.get_network](https://mindspore.cn/docs/zh-CN/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_network) 得到修改后的网络实例了。 ``` python from mindspore import Tensor diff --git a/docs/api/api_python/train/mindspore.train.Callback.rst b/docs/api/api_python/train/mindspore.train.Callback.rst index c3f57452af5..5601c90f434 100644 --- a/docs/api/api_python/train/mindspore.train.Callback.rst +++ b/docs/api/api_python/train/mindspore.train.Callback.rst @@ -11,7 +11,7 @@ mindspore.train.Callback 自定义Callback场景下,在类方法中通过 `RunContext.original_args()` 方法可以获取模型训练或推理过程中已有 的上下文信息,此信息为一个存储了已有属性的字典型变量。用户也可以在此信息中添加其他的自定义属性。此外, 通过调用 `request_stop` 方法来停止训练过程。有关自定义Callback的具体用法,请查看 - `回调机制Callback `_。 + `回调机制Callback `_。 .. py:method:: begin(run_context) diff --git a/docs/api/api_python/train/mindspore.train.Metric.rst b/docs/api/api_python/train/mindspore.train.Metric.rst index 1ee2b365d4f..306f0bfcfe8 100644 --- a/docs/api/api_python/train/mindspore.train.Metric.rst +++ b/docs/api/api_python/train/mindspore.train.Metric.rst @@ -18,7 +18,7 @@ mindspore.train.Metric 教程样例: - `评价指标 Metrics - 自定义Metrics - `_ + `_ .. py:method:: eval() :abstractmethod: @@ -30,7 +30,7 @@ mindspore.train.Metric 教程样例: - `评价指标 Metrics - 自定义Metrics - `_ + `_ .. 
py:method:: indexes :property: @@ -68,4 +68,4 @@ mindspore.train.Metric 教程样例: - `评价指标 Metrics - 自定义Metrics - `_ \ No newline at end of file + `_ \ No newline at end of file diff --git a/docs/api/api_python/train/mindspore.train.Model.rst b/docs/api/api_python/train/mindspore.train.Model.rst index 80a639cf04b..55155d74409 100644 --- a/docs/api/api_python/train/mindspore.train.Model.rst +++ b/docs/api/api_python/train/mindspore.train.Model.rst @@ -19,7 +19,7 @@ - **metrics** (Union[dict, set]) - 用于模型评估的一组评价函数。例如:{'accuracy', 'recall'}。默认值: ``None`` 。 - **eval_network** (Cell) - 用于评估的神经网络。未定义情况下,`Model` 会使用 `network` 和 `loss_fn` 封装一个 `eval_network` 。默认值: ``None`` 。 - **eval_indexes** (list) - 在定义 `eval_network` 的情况下使用。如果 `eval_indexes` 为默认值None,`Model` 会将 `eval_network` 的所有输出传给 `metrics` 。如果配置 `eval_indexes` ,必须包含三个元素,分别为损失值、预测值和标签在 `eval_network` 输出中的位置,此时,损失值将传给损失评价函数,预测值和标签将传给其他评价函数。推荐使用评价函数的 :func:`mindspore.train.Metric.set_indexes` 代替 `eval_indexes` 。默认值: ``None`` 。 - - **amp_level** (str) - `mindspore.amp.build_train_network `_ 的可选参数 `level` , `level` 为混合精度等级,该参数支持["O0", "O1", "O2", "O3", "auto"]。默认值: ``"O0"`` 。 + - **amp_level** (str) - `mindspore.amp.build_train_network `_ 的可选参数 `level` , `level` 为混合精度等级,该参数支持["O0", "O1", "O2", "O3", "auto"]。默认值: ``"O0"`` 。 - "O0": 不变化。 - "O1": 将白名单中的算子转为float16,剩余算子保持float32。白名单中的算子如下列表:[Conv1d, Conv2d, Conv3d, Conv1dTranspose, Conv2dTranspose, Conv3dTranspose, Dense, LSTMCell, RNNCell, GRUCell, MatMul, BatchMatMul, PReLU, ReLU, Ger]。 @@ -75,7 +75,7 @@ 教程样例: - `高阶封装:Model - 训练及保存模型 - `_ + `_ .. py:method:: eval_network :property: @@ -106,7 +106,7 @@ 教程样例: - `高阶封装:Model - 训练及保存模型 - `_ + `_ .. 
py:method:: infer_predict_layout(*predict_data, skip_backend_compile=False) diff --git a/docs/api/api_python/train/mindspore.train.RunContext.rst b/docs/api/api_python/train/mindspore.train.RunContext.rst index ca8a55bdab9..8b3d2dfcb4e 100644 --- a/docs/api/api_python/train/mindspore.train.RunContext.rst +++ b/docs/api/api_python/train/mindspore.train.RunContext.rst @@ -7,7 +7,7 @@ mindspore.train.RunContext `RunContext` 主要用于收集训练或推理过程中模型的上下文相关信息并作为入参传入callback对象中来实现信息的共享。 - Callback的类方法中,调用 `RunContext.original_args()` 可以获取模型当前的上下文信息,用户也可以为此信息添加额外的自定义属性,同时 `request_stop()` 方法可以控制训练过程的停止。具体用法请查看 `回调机制Callback `_。 + Callback的类方法中,调用 `RunContext.original_args()` 可以获取模型当前的上下文信息,用户也可以为此信息添加额外的自定义属性,同时 `request_stop()` 方法可以控制训练过程的停止。具体用法请查看 `回调机制Callback `_。 `RunContext.original_args()` 存储的模型信息为一个字典型变量,在训练和推理过程会存储不同的属性。详情如下: @@ -76,7 +76,7 @@ mindspore.train.RunContext 教程样例: - `回调机制 Callback - 自定义回调机制 - `_ + `_ .. py:method:: request_stop() @@ -86,4 +86,4 @@ mindspore.train.RunContext 教程样例: - `回调机制 Callback - 自定义终止训练 - `_ + `_ diff --git a/docs/api/api_python_en/mindspore.dataset.transforms.rst b/docs/api/api_python_en/mindspore.dataset.transforms.rst index 8b256ff9e4c..cef92253b16 100644 --- a/docs/api/api_python_en/mindspore.dataset.transforms.rst +++ b/docs/api/api_python_en/mindspore.dataset.transforms.rst @@ -46,7 +46,7 @@ Vision Example Gallery ^^^^^^^^^^^^^^^^ -Example gallery of using vision transform APIs, jump to `Load & Process Data With Dataset Pipeline `_. +Example gallery of using vision transform APIs, jump to `Load & Process Data With Dataset Pipeline `_. This guide presents various transforms and input/output results. Transforms @@ -169,7 +169,7 @@ Text Example Gallery ^^^^^^^^^^^^^^^^ -Example gallery of using vision transform APIs, jump to `Illustration of text transforms `_. +Example gallery of using text transform APIs, jump to `Illustration of text transforms `_. This guide presents various transforms and input/output results. 
Transforms @@ -234,7 +234,7 @@ Audio Example Gallery ^^^^^^^^^^^^^^^^ -Example gallery of using vision transform APIs, jump to `Illustration of audio transforms `_. +Example gallery of using audio transform APIs, jump to `Illustration of audio transforms `_. This guide presents various transforms and input/output results. Transforms diff --git a/docs/api/api_python_en/mindspore.experimental.rst b/docs/api/api_python_en/mindspore.experimental.rst index 922a536f258..dd8534d4f53 100644 --- a/docs/api/api_python_en/mindspore.experimental.rst +++ b/docs/api/api_python_en/mindspore.experimental.rst @@ -39,7 +39,7 @@ LRScheduler subclass dynamically changes the learning rate by calling the `step` from mindspore.experimental import optim # Define the network structure of LeNet5. Refer to - # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py net = LeNet5() loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) @@ -56,7 +56,7 @@ LRScheduler subclass dynamically changes the learning rate by calling the `step` return loss for epoch in range(6): # Create the dataset taking MNIST as an example. Refer to - # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py for data, label in create_dataset(need_download=False): train_step(data, label) diff --git a/docs/api/api_python_en/mindspore.nn.rst b/docs/api/api_python_en/mindspore.nn.rst index 0a47561ac54..7a34c6066c3 100644 --- a/docs/api/api_python_en/mindspore.nn.rst +++ b/docs/api/api_python_en/mindspore.nn.rst @@ -5,9 +5,9 @@ Neural Network Cell For building predefined building blocks or computational units in neural networks. -For more information about dynamic shape support status, please refer to `Dynamic Shape Support Status of nn Interface `_ . 
+For more information about dynamic shape support status, please refer to `Dynamic Shape Support Status of nn Interface `_ . -Compared with the previous version, the added, deleted and supported platforms change information of `mindspore.nn` operators in MindSpore, please refer to the link `mindspore.nn API Interface Change `_ . +Compared with the previous version, the added, deleted and supported platforms change information of `mindspore.nn` operators in MindSpore, please refer to the link `mindspore.nn API Interface Change `_ . Basic Block ----------- diff --git a/docs/api/api_python_en/mindspore.numpy.rst b/docs/api/api_python_en/mindspore.numpy.rst index 6fef138a254..dcae1c74235 100644 --- a/docs/api/api_python_en/mindspore.numpy.rst +++ b/docs/api/api_python_en/mindspore.numpy.rst @@ -664,7 +664,7 @@ The following are examples: ... Tensor(shape=[4], dtype=Float32, value= [ 2.00000000e+00, 2.00000000e+00, 2.00000000e+00, 2.00000000e+00])) - For more details, see `API GradOperation `_ . + For more details, see `API GradOperation `_ . - Use mindspore.set_context to control execution mode @@ -690,7 +690,7 @@ The following are examples: set_context(device_target="Ascend") ... - For more details, see `API mindspore.set_context `_ . + For more details, see `API mindspore.set_context `_ . - Use mindspore.numpy in MindSpore Deep Learning Models diff --git a/docs/api/api_python_en/mindspore.ops.primitive.rst b/docs/api/api_python_en/mindspore.ops.primitive.rst index 1708b68a346..e1ad2909fe5 100644 --- a/docs/api/api_python_en/mindspore.ops.primitive.rst +++ b/docs/api/api_python_en/mindspore.ops.primitive.rst @@ -3,12 +3,12 @@ mindspore.ops.primitive operators that can be used for constructor function of Cell -For more information about dynamic shape support status, please refer to `Dynamic Shape Support Status of primitive Interface `_ . +For more information about dynamic shape support status, please refer to `Dynamic Shape Support Status of primitive Interface `_ . 
-For more information about the support for the bfloat16 data type, please refer to `Support List `_ . +For more information about the support for the bfloat16 data type, please refer to `Support List `_ . For the details about the usage constraints of each operator in the operator parallel process, -refer to `Usage Constraints During Operator Parallel `_ . +refer to `Usage Constraints During Operator Parallel `_ . The module import method is as follows: @@ -16,7 +16,7 @@ The module import method is as follows: import mindspore.ops as ops -Compared with the previous version, the added, deleted and supported platforms change information of `mindspore.ops.primitive` operators in MindSpore, please refer to the link `mindspore.ops.primitive API Interface Change `_ . +Compared with the previous version, the added, deleted and supported platforms change information of `mindspore.ops.primitive` operators in MindSpore, please refer to the link `mindspore.ops.primitive API Interface Change `_ . Operator Primitives ------------------- @@ -615,15 +615,15 @@ Data Operation Operator Communication Operator ---------------------- -Distributed training involves communication operations for data transfer. For more details, refer to `Distributed Set Communication Primitives `_ . +Distributed training involves communication operations for data transfer. For more details, refer to `Distributed Set Communication Primitives `_ . Note that the APIs in the following list need to preset communication environment variables. For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `rank table Startup \ -`_ for more details. +`_ for more details. For the GPU device, users need to prepare the host file and mpi, please see the `mpirun Startup \ -`_. +`_. For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster Startup \ -`_ . +`_ . .. 
msplatwarnautosummary:: :toctree: ops diff --git a/docs/api/api_python_en/mindspore.ops.rst b/docs/api/api_python_en/mindspore.ops.rst index 5daaf9a6896..4e622bab6c8 100644 --- a/docs/api/api_python_en/mindspore.ops.rst +++ b/docs/api/api_python_en/mindspore.ops.rst @@ -1,9 +1,9 @@ mindspore.ops ============== -For more information about dynamic shape support status, please refer to `Dynamic Shape Support Status of ops Interface `_ . +For more information about dynamic shape support status, please refer to `Dynamic Shape Support Status of ops Interface `_ . -Compared with the previous version, the added, deleted and supported platforms change information of `mindspore.ops` operators in MindSpore, please refer to the link `mindspore.ops API Interface Change `_. +Compared with the previous version, the added, deleted and supported platforms change information of `mindspore.ops` operators in MindSpore, please refer to the link `mindspore.ops API Interface Change `_. Neural Network Layer Functions ------------------------------ diff --git a/docs/api/api_python_en/mindspore.rewrite.rst b/docs/api/api_python_en/mindspore.rewrite.rst index 2556eb7f576..810114467b1 100644 --- a/docs/api/api_python_en/mindspore.rewrite.rst +++ b/docs/api/api_python_en/mindspore.rewrite.rst @@ -4,7 +4,7 @@ mindspore.rewrite The ReWrite module in MindSpore provides users with the ability to modify the network's forward computation process based on custom rules, such as inserting, deleting, and replacing statements. -For a quick start of using ReWrite, please refer to `Modifying Network With ReWrite `_ . +For a quick start of using ReWrite, please refer to `Modifying Network With ReWrite `_ . .. 
automodule:: mindspore.rewrite :exclude-members: SparseFunc, PatternEngine, PatternNode, VarNode, Replacement, sparsify, ArgType diff --git a/docs/api/api_python_en/mindspore.rst b/docs/api/api_python_en/mindspore.rst index 45daae1e434..b5f5646ad46 100644 --- a/docs/api/api_python_en/mindspore.rst +++ b/docs/api/api_python_en/mindspore.rst @@ -75,7 +75,7 @@ DataType ============================ ================= Type Description ============================ ================= - ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor `_. + ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor `_. ``bool_`` Boolean ``True`` or ``False``. ``int_`` Integer scalar. ``uint`` Unsigned integer scalar. diff --git a/docs/api/api_python_en/samples/dataset/audio_gallery.ipynb b/docs/api/api_python_en/samples/dataset/audio_gallery.ipynb index f6aa420d508..71c700c36d8 100644 --- a/docs/api/api_python_en/samples/dataset/audio_gallery.ipynb +++ b/docs/api/api_python_en/samples/dataset/audio_gallery.ipynb @@ -7,9 +7,9 @@ "source": [ "# Illustration of audio transforms\n", "\n", - "[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/mindspore/blob/r2.3.q1/docs/api/api_python_en/samples/dataset/audio_gallery.ipynb)\n", + "[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python_en/samples/dataset/audio_gallery.ipynb)\n", "\n", - "This example illustrates the various transforms available in the [mindspore.dataset.audio](https://www.mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.audio) module." 
+ "This example illustrates the various transforms available in the [mindspore.dataset.audio](https://www.mindspore.cn/docs/en/master/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.audio) module." ] }, { @@ -95,7 +95,7 @@ "source": [ "## Spectrogram\n", "\n", - "To create a spectrogram from an audio signal, you can use [mindspore.dataset.audio.Spectrogram](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.Spectrogram.html#mindspore.dataset.audio.Spectrogram)." + "To create a spectrogram from an audio signal, you can use [mindspore.dataset.audio.Spectrogram](https://mindspore.cn/docs/en/master/api_python/dataset_audio/mindspore.dataset.audio.Spectrogram.html#mindspore.dataset.audio.Spectrogram)." ] }, { @@ -188,7 +188,7 @@ "source": [ "## GriffinLim\n", "\n", - "To recover a waveform from a spectrogram, you can use [mindspore.dataset.audio.GriffinLim](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.GriffinLim.html#mindspore.dataset.audio.GriffinLim)." + "To recover a waveform from a spectrogram, you can use [mindspore.dataset.audio.GriffinLim](https://mindspore.cn/docs/en/master/api_python/dataset_audio/mindspore.dataset.audio.GriffinLim.html#mindspore.dataset.audio.GriffinLim)." ] }, { @@ -259,7 +259,7 @@ "source": [ "## Mel Filter Bank\n", "\n", - "To generate frequency transformation matrix, use [mindspore.dataset.audio.melscale_fbanks](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.melscale_fbanks.html#mindspore.dataset.audio.melscale_fbanks)." + "To generate frequency transformation matrix, use [mindspore.dataset.audio.melscale_fbanks](https://mindspore.cn/docs/en/master/api_python/dataset_audio/mindspore.dataset.audio.melscale_fbanks.html#mindspore.dataset.audio.melscale_fbanks)." 
] }, { @@ -306,7 +306,7 @@ "source": [ "## MelSpectrogram\n", "\n", - "To create a mel-scale spectrogram for a raw audio signal, use [mindspore.dataset.audio.MelSpectrogram](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.MelSpectrogram.html#mindspore.dataset.audio.MelSpectrogram)." + "To create a mel-scale spectrogram for a raw audio signal, use [mindspore.dataset.audio.MelSpectrogram](https://mindspore.cn/docs/en/master/api_python/dataset_audio/mindspore.dataset.audio.MelSpectrogram.html#mindspore.dataset.audio.MelSpectrogram)." ] }, { @@ -360,7 +360,7 @@ "source": [ "## MFCC\n", "\n", - "[mindspore.dataset.audio.MFCC](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.MFCC.html#mindspore.dataset.audio.MFCC) returns Mel Frequency Cepstrum Coefficient for a raw audio signal." + "[mindspore.dataset.audio.MFCC](https://mindspore.cn/docs/en/master/api_python/dataset_audio/mindspore.dataset.audio.MFCC.html#mindspore.dataset.audio.MFCC) returns Mel Frequency Cepstrum Coefficient for a raw audio signal." ] }, { @@ -423,7 +423,7 @@ "source": [ "## LFCC\n", "\n", - "[mindspore.dataset.audio.LFCC](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_audio/mindspore.dataset.audio.LFCC.html#mindspore.dataset.audio.LFCC) returns Linear Frequency Cepstral Coefficient for a raw audio signal.\n" + "[mindspore.dataset.audio.LFCC](https://mindspore.cn/docs/en/master/api_python/dataset_audio/mindspore.dataset.audio.LFCC.html#mindspore.dataset.audio.LFCC) returns Linear Frequency Cepstral Coefficient for a raw audio signal.\n" ] }, { @@ -480,7 +480,7 @@ "source": [ "## Process Wav File In Dataset Pipeline\n", "\n", - "Use the [mindspore.dataset.GeneratorDataset](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset) to read wav files into dataset pipeline and then we can do further transforms based on pipeline." 
+ "Use the [mindspore.dataset.GeneratorDataset](https://mindspore.cn/docs/en/master/api_python/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset) to read wav files into dataset pipeline and then we can do further transforms based on pipeline." ] }, { diff --git a/docs/api/api_python_en/samples/dataset/dataset_gallery.ipynb b/docs/api/api_python_en/samples/dataset/dataset_gallery.ipynb index d9d5d83ccae..e62087aca42 100644 --- a/docs/api/api_python_en/samples/dataset/dataset_gallery.ipynb +++ b/docs/api/api_python_en/samples/dataset/dataset_gallery.ipynb @@ -7,9 +7,9 @@ "source": [ "# Load & Process Data With Dataset Pipeline\n", "\n", - "[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/mindspore/blob/r2.3.q1/docs/api/api_python_en/samples/dataset/dataset_gallery.ipynb)\n", + "[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python_en/samples/dataset/dataset_gallery.ipynb)\n", "\n", - "This example illustrates the various usages available in the [mindspore.dataset](https://www.mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.dataset.html) module." + "This example illustrates the various usages available in the [mindspore.dataset](https://www.mindspore.cn/docs/en/master/api_python/mindspore.dataset.html) module." 
] }, { @@ -84,7 +84,7 @@ "source": [ "## Load Open Source Datasets\n", "\n", - "Load MNIST/Cifar10 dataset with [mindspore.dataset.MnistDataset](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset/mindspore.dataset.MnistDataset.html#mindspore.dataset.MnistDataset) and [mindspore.dataset.Cifar10Dataset](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset/mindspore.dataset.Cifar10Dataset.html#mindspore.dataset.Cifar10Dataset).\n", + "Load MNIST/Cifar10 dataset with [mindspore.dataset.MnistDataset](https://mindspore.cn/docs/en/master/api_python/dataset/mindspore.dataset.MnistDataset.html#mindspore.dataset.MnistDataset) and [mindspore.dataset.Cifar10Dataset](https://mindspore.cn/docs/en/master/api_python/dataset/mindspore.dataset.Cifar10Dataset.html#mindspore.dataset.Cifar10Dataset).\n", "\n", "Examples shows how to load dataset files and show the content.\n", "\n", @@ -206,7 +206,7 @@ "source": [ "### Load Dataset In Folders\n", "\n", - "For ImageNet dataset or other datasets with similar structure, it is suggest to use [mindspore.dataset.ImageFolderDataset](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset/mindspore.dataset.ImageFolderDataset.html#mindspore.dataset.ImageFolderDataset) to load files into dataset pipeline.\n", + "For ImageNet dataset or other datasets with similar structure, it is suggested to use [mindspore.dataset.ImageFolderDataset](https://mindspore.cn/docs/en/master/api_python/dataset/mindspore.dataset.ImageFolderDataset.html#mindspore.dataset.ImageFolderDataset) to load files into dataset pipeline.\n", "\n", "```text\n", "Structure of ImageNet dataset:\n", @@ -328,7 +328,7 @@ "\n", "`mindspore.dataset` module provides the loading APIs for some common datasets and standard format datasets.\n", "\n", - "For those datasets that MindSpore does not support yet, [mindspore.dataset.GeneratorDataset](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset) provides
ways for users to load and process their data manually.\n", + "For those datasets that MindSpore does not support yet, [mindspore.dataset.GeneratorDataset](https://mindspore.cn/docs/en/master/api_python/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset) provides ways for users to load and process their data manually.\n", "\n", "`GeneratorDataset` supports constructing customized datasets from random-accessible objects, iterable objects and Python generator.\n", "\n", diff --git a/docs/api/api_python_en/samples/dataset/text_gallery.ipynb b/docs/api/api_python_en/samples/dataset/text_gallery.ipynb index e6a86f54af7..66b8e25f4b0 100644 --- a/docs/api/api_python_en/samples/dataset/text_gallery.ipynb +++ b/docs/api/api_python_en/samples/dataset/text_gallery.ipynb @@ -7,9 +7,9 @@ "source": [ "# Illustration of text transforms\n", "\n", - "[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/mindspore/blob/r2.3.q1/docs/api/api_python_en/samples/dataset/text_gallery.ipynb)\n", + "[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python_en/samples/dataset/text_gallery.ipynb)\n", "\n", - "This example illustrates the various transforms available in the [mindspore.dataset.text](https://www.mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.text) module.\n" + "This example illustrates the various transforms available in the [mindspore.dataset.text](https://www.mindspore.cn/docs/en/master/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.text) module.\n" ] }, { @@ -72,7 +72,7 @@ "source": [ "## Vocab\n", "\n", - "The 
[mindspore.dataset.text.Vocab](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_text/mindspore.dataset.text.Vocab.html#mindspore.dataset.text.Vocab) is used to save pairs of words and ids.\n", + "The [mindspore.dataset.text.Vocab](https://mindspore.cn/docs/en/master/api_python/dataset_text/mindspore.dataset.text.Vocab.html#mindspore.dataset.text.Vocab) is used to save pairs of words and ids.\n", "It contains a map that maps each word(str) to an id(int) or reverse." ] }, @@ -118,7 +118,7 @@ "source": [ "## AddToken\n", "\n", - "The [mindspore.dataset.text.AddToken](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_text/mindspore.dataset.text.AddToken.html#mindspore.dataset.text.AddToken) transform adds token to beginning or end of sequence.\n", + "The [mindspore.dataset.text.AddToken](https://mindspore.cn/docs/en/master/api_python/dataset_text/mindspore.dataset.text.AddToken.html#mindspore.dataset.text.AddToken) transform adds token to beginning or end of sequence.\n", "\n" ] }, @@ -152,7 +152,7 @@ "source": [ "## SentencePieceTokenizer\n", "\n", - "The [mindspore.dataset.text.SentencePieceTokenizer](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_text/mindspore.dataset.text.SentencePieceTokenizer.html#mindspore.dataset.text.SentencePieceTokenizer) transform tokenizes scalar string into tokens by sentencepiece.\n" + "The [mindspore.dataset.text.SentencePieceTokenizer](https://mindspore.cn/docs/en/master/api_python/dataset_text/mindspore.dataset.text.SentencePieceTokenizer.html#mindspore.dataset.text.SentencePieceTokenizer) transform tokenizes scalar string into tokens by sentencepiece.\n" ] }, { @@ -193,7 +193,7 @@ "source": [ "## WordpieceTokenizer\n", "\n", - "The [mindspore.dataset.text.WordpieceTokenizer](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_text/mindspore.dataset.text.WordpieceTokenizer.html#mindspore.dataset.text.WordpieceTokenizer) transform tokenizes the input text to subword tokens.\n", + "The 
[mindspore.dataset.text.WordpieceTokenizer](https://mindspore.cn/docs/en/master/api_python/dataset_text/mindspore.dataset.text.WordpieceTokenizer.html#mindspore.dataset.text.WordpieceTokenizer) transform tokenizes the input text to subword tokens.\n", "\n" ] }, @@ -226,7 +226,7 @@ "source": [ "## Process TXT File In Dataset Pipeline\n", "\n", - "Use [mindspore.dataset.TextFileDataset](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset/mindspore.dataset.TextFileDataset.html#mindspore.dataset.TextFileDataset) to read content of text into dataset pipeline and the perform tokenization on text." + "Use [mindspore.dataset.TextFileDataset](https://mindspore.cn/docs/en/master/api_python/dataset/mindspore.dataset.TextFileDataset.html#mindspore.dataset.TextFileDataset) to read content of text into dataset pipeline and then perform tokenization on text." ] }, { diff --git a/docs/api/api_python_en/samples/dataset/vision_gallery.ipynb b/docs/api/api_python_en/samples/dataset/vision_gallery.ipynb index adb3baf1403..edc0dc344ee 100644 --- a/docs/api/api_python_en/samples/dataset/vision_gallery.ipynb +++ b/docs/api/api_python_en/samples/dataset/vision_gallery.ipynb @@ -7,9 +7,9 @@ "source": [ "# Illustration of vision transforms\n", "\n", - "[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/mindspore/blob/r2.3.q1/docs/api/api_python_en/samples/dataset/vision_gallery.ipynb)\n", + "[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python_en/samples/dataset/vision_gallery.ipynb)\n", "\n", - "This example illustrates the various transforms available in the [mindspore.dataset.vision](https://www.mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.vision) 
module.\n" + "This example illustrates the various transforms available in the [mindspore.dataset.vision](https://www.mindspore.cn/docs/en/master/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.vision) module.\n" ] }, { @@ -83,7 +83,7 @@ "\n", "### Pad\n", "\n", - "The [mindspore.dataset.vision.Pad](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.Pad.html#mindspore.dataset.vision.Pad) transform pads the borders of image with some pixels." + "The [mindspore.dataset.vision.Pad](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.Pad.html#mindspore.dataset.vision.Pad) transform pads the borders of image with some pixels." ] }, { @@ -116,7 +116,7 @@ "source": [ "### Resize\n", "\n", - "The [mindspore.dataset.vision.Resize](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.Resize.html#mindspore.dataset.vision.Resize) transform resizes an image to a given size." + "The [mindspore.dataset.vision.Resize](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.Resize.html#mindspore.dataset.vision.Resize) transform resizes an image to a given size." ] }, { @@ -149,7 +149,7 @@ "source": [ "### CenterCrop\n", "\n", - "The [mindspore.dataset.vision.CenterCrop](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.CenterCrop.html#mindspore.dataset.vision.CenterCrop) transform crop the image at the center with given size." + "The [mindspore.dataset.vision.CenterCrop](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.CenterCrop.html#mindspore.dataset.vision.CenterCrop) transform crops the image at the center with given size." 
] }, { @@ -182,7 +182,7 @@ "source": [ "### FiveCrop\n", "\n", - "The [mindspore.dataset.vision.FiveCrop](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.FiveCrop.html#mindspore.dataset.vision.FiveCrop) transform crops the given image into one central crop and four corners." + "The [mindspore.dataset.vision.FiveCrop](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.FiveCrop.html#mindspore.dataset.vision.FiveCrop) transform crops the given image into one central crop and four corners." ] }, { @@ -215,7 +215,7 @@ "source": [ "### RandomPerspective\n", "\n", - "The [mindspore.dataset.vision.RandomPerspective](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomPerspective.html#mindspore.dataset.vision.RandomPerspective) transform\n", + "The [mindspore.dataset.vision.RandomPerspective](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomPerspective.html#mindspore.dataset.vision.RandomPerspective) transform\n", "performs random perspective transform on an image." ] }, @@ -248,7 +248,7 @@ "source": [ "### RandomRotation\n", "\n", - "The [mindspore.dataset.vision.RandomRotation](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomRotation.html#mindspore.dataset.vision.RandomRotation) transform\n", + "The [mindspore.dataset.vision.RandomRotation](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomRotation.html#mindspore.dataset.vision.RandomRotation) transform\n", "rotates an image with random angle." ] }, @@ -281,7 +281,7 @@ "source": [ "### RandomAffine\n", "\n", - "The [mindspore.dataset.vision.RandomAffine](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomAffine.html#mindspore.dataset.vision.RandomAffine) transform performs random affine transform on an image." 
+ "The [mindspore.dataset.vision.RandomAffine](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomAffine.html#mindspore.dataset.vision.RandomAffine) transform performs random affine transform on an image." ] }, { @@ -313,7 +313,7 @@ "source": [ "### RandomCrop\n", "\n", - "The [mindspore.dataset.vision.RandomCrop](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomCrop.html#mindspore.dataset.vision.RandomCrop) transform crops an image at a random location.\n", + "The [mindspore.dataset.vision.RandomCrop](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomCrop.html#mindspore.dataset.vision.RandomCrop) transform crops an image at a random location.\n", "\n" ] }, @@ -346,7 +346,7 @@ "source": [ "### RandomResizedCrop\n", "\n", - "The [mindspore.dataset.vision.RandomResizedCrop](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCrop.html#mindspore.dataset.vision.RandomResizedCrop) transform crops an image at a random location, and then resizes the crop to a given\n", + "The [mindspore.dataset.vision.RandomResizedCrop](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomResizedCrop.html#mindspore.dataset.vision.RandomResizedCrop) transform crops an image at a random location, and then resizes the crop to a given\n", "size." ] }, @@ -389,7 +389,7 @@ "source": [ "### Grayscale\n", "\n", - "The [mindspore.dataset.vision.Grayscale](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.Grayscale.html#mindspore.dataset.vision.Grayscale) transform converts an image to grayscale." + "The [mindspore.dataset.vision.Grayscale](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.Grayscale.html#mindspore.dataset.vision.Grayscale) transform converts an image to grayscale." 
] }, { @@ -422,7 +422,7 @@ "source": [ "### RandomColorAdjust\n", "\n", - "The [mindspore.dataset.vision.RandomColorAdjust](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomColorAdjust.html#mindspore.dataset.vision.RandomColorAdjust) transform randomly changes the brightness, contrast, saturation and hue of the input image." + "The [mindspore.dataset.vision.RandomColorAdjust](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomColorAdjust.html#mindspore.dataset.vision.RandomColorAdjust) transform randomly changes the brightness, contrast, saturation and hue of the input image." ] }, { @@ -456,7 +456,7 @@ "source": [ "### GaussianBlur\n", "\n", - "The [mindspore.dataset.vision.GaussianBlur](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.GaussianBlur.html#mindspore.dataset.vision.GaussianBlur) transform\n", + "The [mindspore.dataset.vision.GaussianBlur](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.GaussianBlur.html#mindspore.dataset.vision.GaussianBlur) transform\n", "performs gaussian blur transform on an image." 
] }, @@ -491,7 +491,7 @@ "source": [ "### RandomInvert\n", "\n", - "The [mindspore.dataset.vision.RandomInvert](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomInvert.html#mindspore.dataset.vision.RandomInvert) transform randomly inverts the colors of the given image.\n", + "The [mindspore.dataset.vision.RandomInvert](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomInvert.html#mindspore.dataset.vision.RandomInvert) transform randomly inverts the colors of the given image.\n", "\n" ] }, @@ -526,7 +526,7 @@ "source": [ "### RandomPosterize\n", "\n", - "The [mindspore.dataset.vision.RandomPosterize](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomPosterize.html#mindspore.dataset.vision.RandomPosterize) transform randomly reduces the bit depth of the color channels of image to create a high contrast and vivid color image.\n" + "The [mindspore.dataset.vision.RandomPosterize](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomPosterize.html#mindspore.dataset.vision.RandomPosterize) transform randomly reduces the bit depth of the color channels of image to create a high contrast and vivid color image.\n" ] }, { @@ -560,7 +560,7 @@ "source": [ "### RandomSolarize\n", "\n", - "The [mindspore.dataset.vision.RandomSolarize](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomSolarize.html#mindspore.dataset.vision.RandomSolarize) transform randomly solarizes the image by inverting pixel values within specified threshold.\n", + "The [mindspore.dataset.vision.RandomSolarize](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomSolarize.html#mindspore.dataset.vision.RandomSolarize) transform randomly solarizes the image by inverting pixel values within specified threshold.\n", "\n" ] }, @@ -595,7 +595,7 @@ "source": [ "### 
RandomAdjustSharpness\n", "\n", - "The [mindspore.dataset.vision.RandomAdjustSharpness](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomAdjustSharpness.html#mindspore.dataset.vision.RandomAdjustSharpness) transform randomly adjusts the sharpness of the given image.\n", + "The [mindspore.dataset.vision.RandomAdjustSharpness](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomAdjustSharpness.html#mindspore.dataset.vision.RandomAdjustSharpness) transform randomly adjusts the sharpness of the given image.\n", "\n" ] }, @@ -630,7 +630,7 @@ "source": [ "### RandomAutoContrast\n", "\n", - "The [mindspore.dataset.vision.RandomAutoContrast](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomAutoContrast.html#mindspore.dataset.vision.RandomAutoContrast) transform randomly applies autocontrast to the given image." + "The [mindspore.dataset.vision.RandomAutoContrast](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomAutoContrast.html#mindspore.dataset.vision.RandomAutoContrast) transform randomly applies autocontrast to the given image." 
] }, { @@ -664,7 +664,7 @@ "source": [ "### RandomEqualize\n", "\n", - "The [mindspore.dataset.vision.RandomEqualize](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomEqualize.html#mindspore.dataset.vision.RandomEqualize) transform randomly equalizes the histogram of the given image.\n", + "The [mindspore.dataset.vision.RandomEqualize](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomEqualize.html#mindspore.dataset.vision.RandomEqualize) transform randomly equalizes the histogram of the given image.\n", "\n" ] }, @@ -709,7 +709,7 @@ "source": [ "### AutoAugment\n", "\n", - "The [mindspore.dataset.vision.AutoAugment](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.AutoAugment.html#mindspore.dataset.vision.AutoAugment) transform applies AutoAugment method based on [AutoAugment: Learning Augmentation Strategies from Data](https://arxiv.org/pdf/1805.09501.pdf)." + "The [mindspore.dataset.vision.AutoAugment](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.AutoAugment.html#mindspore.dataset.vision.AutoAugment) transform applies AutoAugment method based on [AutoAugment: Learning Augmentation Strategies from Data](https://arxiv.org/pdf/1805.09501.pdf)." 
] }, { @@ -741,7 +741,7 @@ "source": [ "### RandAugment\n", "\n", - "The [mindspore.dataset.vision.RandAugment](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandAugment.html#mindspore.dataset.vision.RandAugment) applies RandAugment method based on [RandAugment: Learning Augmentation Strategies from Data](https://arxiv.org/pdf/1909.13719.pdf).\n", + "The [mindspore.dataset.vision.RandAugment](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandAugment.html#mindspore.dataset.vision.RandAugment) applies RandAugment method based on [RandAugment: Learning Augmentation Strategies from Data](https://arxiv.org/pdf/1909.13719.pdf).\n", "\n" ] }, @@ -776,7 +776,7 @@ "source": [ "### TrivialAugmentWide\n", "\n", - "The [mindspore.dataset.vision.TrivialAugmentWide](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.TrivialAugmentWide.html#mindspore.dataset.vision.TrivialAugmentWide) applies TrivialAugmentWide method based on [TrivialAugmentWide: Tuning-free Yet State-of-the-Art Data Augmentation](https://arxiv.org/abs/2103.10158).\n", + "The [mindspore.dataset.vision.TrivialAugmentWide](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.TrivialAugmentWide.html#mindspore.dataset.vision.TrivialAugmentWide) applies TrivialAugmentWide method based on [TrivialAugmentWide: Tuning-free Yet State-of-the-Art Data Augmentation](https://arxiv.org/abs/2103.10158).\n", "\n" ] }, @@ -816,7 +816,7 @@ "\n", "### RandomHorizontalFlip\n", "\n", - "The [mindspore.dataset.vision.RandomHorizontalFlip](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlip.html#mindspore.dataset.vision.RandomHorizontalFlip) transform performs horizontal flip of an image, with a given probability." 
+ "The [mindspore.dataset.vision.RandomHorizontalFlip](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomHorizontalFlip.html#mindspore.dataset.vision.RandomHorizontalFlip) transform performs horizontal flip of an image, with a given probability." ] }, { @@ -850,7 +850,7 @@ "source": [ "### RandomVerticalFlip\n", "\n", - "The [mindspore.dataset.vision.RandomVerticalFlip](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlip.html#mindspore.dataset.vision.RandomVerticalFlip) transform performs vertical flip of an image, with a given probability." + "The [mindspore.dataset.vision.RandomVerticalFlip](https://mindspore.cn/docs/en/master/api_python/dataset_vision/mindspore.dataset.vision.RandomVerticalFlip.html#mindspore.dataset.vision.RandomVerticalFlip) transform performs vertical flip of an image, with a given probability." ] }, { @@ -884,7 +884,7 @@ "source": [ "### RandomApply\n", "\n", - "The [mindspore.dataset.transforms.RandomApply](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset_transforms/mindspore.dataset.transforms.RandomApply.html#mindspore.dataset.transforms.RandomApply) transform randomly applies a list of transforms, with a given probability." + "The [mindspore.dataset.transforms.RandomApply](https://mindspore.cn/docs/en/master/api_python/dataset_transforms/mindspore.dataset.transforms.RandomApply.html#mindspore.dataset.transforms.RandomApply) transform randomly applies a list of transforms, with a given probability." ] }, { @@ -920,7 +920,7 @@ "source": [ "## Process Image File In Dataset Pipeline\n", "\n", - "Use the [mindspore.dataset.ImageFolderDataset](https://mindspore.cn/docs/en/r2.3.q1/api_python/dataset/mindspore.dataset.ImageFolderDataset.html#mindspore.dataset.ImageFolderDataset) to read image content into dataset pipeline and then we can do further transforms based on pipeline." 
+ "Use the [mindspore.dataset.ImageFolderDataset](https://mindspore.cn/docs/en/master/api_python/dataset/mindspore.dataset.ImageFolderDataset.html#mindspore.dataset.ImageFolderDataset) to read image content into dataset pipeline and then we can do further transforms based on pipeline." ] }, { diff --git a/docs/api/api_python_en/samples/ops/communicate_ops.md b/docs/api/api_python_en/samples/ops/communicate_ops.md index 324b3b1b2a3..21a1014a7ca 100644 --- a/docs/api/api_python_en/samples/ops/communicate_ops.md +++ b/docs/api/api_python_en/samples/ops/communicate_ops.md @@ -1,10 +1,10 @@ # Distributed Set Communication Primitives - + Distributed training involves communication operations such as `AllReduce`, `ReduceScatter`, `AllGather` and `Broadcast` for data transfer, and we will explain their meaning and sample code in the following sections. -Examples of different communication operations by using 4 GPUs are given in each of the following sections. The output in the example comes from the results of the `rank0` program on card 0. The user needs to save each section code below as a separate communication.py. Because it involves a multi-card program, the user needs to go through the `mpirun` command to start communication.py. The `mpirun` commands requires the installation of OpenMPI as well as NCCL, and please refer to [here](https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/mpirun.html) for the corresponding installation. +Examples of different communication operations by using 4 GPUs are given in each of the following sections. The output in the example comes from the results of the `rank0` program on card 0. The user needs to save each section code below as a separate communication.py. Because it involves a multi-card program, the user needs to go through the `mpirun` command to start communication.py. 
The `mpirun` command requires the installation of OpenMPI as well as NCCL, and please refer to [here](https://www.mindspore.cn/tutorials/experts/en/master/parallel/mpirun.html) for the corresponding installation. ```bash mpirun -output-filename log -merge-stderr-to-stdout -np 4 python communication.py diff --git a/docs/api/api_python_en/samples/rewrite/rewrite_tutorial.md b/docs/api/api_python_en/samples/rewrite/rewrite_tutorial.md index 3a8a3502e1c..8060506cbdb 100644 --- a/docs/api/api_python_en/samples/rewrite/rewrite_tutorial.md +++ b/docs/api/api_python_en/samples/rewrite/rewrite_tutorial.md @@ -1,8 +1,8 @@ # Modifying Network With ReWrite -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/mindspore/blob/r2.3.q1/docs/api/api_python_en/samples/rewrite/rewrite_tutorial.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python_en/samples/rewrite/rewrite_tutorial.md) -This example illustrates the various usages of APIs available in the [mindspore.rewrite](https://www.mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html) module. +This example illustrates the various usages of APIs available in the [mindspore.rewrite](https://www.mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html) module. ## Function Introduction @@ -16,9 +16,9 @@ network code, or a new network instance can be obtained. ## Creating A SymbolTree When we need to modify a network using the ReWrite module, we first need to create a SymbolTree based on the instance -of the network, using the interface [mindspore.rewrite.SymbolTree.create](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.create) . 
+of the network, using the interface [mindspore.rewrite.SymbolTree.create](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.create) . -Through the use of the interface [mindspore.rewrite.SymbolTree.get_code](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_code), we can view the network code currently +Through the use of the interface [mindspore.rewrite.SymbolTree.get_code](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_code), we can view the network code currently stored in SymbolTree. ``` python @@ -73,7 +73,7 @@ the function. The new network also saves the current working directory to ``sys.path`` , ensuring that modules that the original network depends on can be searched for when running on the new network. -By using the interface [mindspore.rewrite.SymbolTree.print_node_tabulate](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.print_node_tabulate) , we can see the node information and node +By using the interface [mindspore.rewrite.SymbolTree.print_node_tabulate](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.print_node_tabulate) , we can see the node information and node topology relationships stored in the SymbolTree. This interface depends on the tabulate module, and the installation command is: ``pip install tabulate`` . 
@@ -139,8 +139,8 @@ expanded into three lines of code and then converted into three corresponding no ## Inserting Nodes When we need to insert a new line of code during the forward computation of the network, we can first create a new node -using interface [mindspore.rewrite.Node.create_call_cell](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.create_call_cell) , and then insert the created node into SymbolTree -using interface [mindspore.rewrite.SymbolTree.insert](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.insert) . +using interface [mindspore.rewrite.Node.create_call_cell](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.create_call_cell) , and then insert the created node into SymbolTree +using interface [mindspore.rewrite.SymbolTree.insert](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.insert) . ``` python from mindspore.rewrite import SymbolTree, Node, ScopedValue @@ -157,8 +157,8 @@ stree.print_node_tabulate() In this example, the process for inserting a node is as follows: 1. Firstly, a new node is created. The Cell used is ``nn.ReLU()`` , the input and output are ``"x"`` , and the node name is ``"new_relu"`` . -2. Then the dense node is fetched by using [mindspore.rewrite.SymbolTree.get_node](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_node) . -3. Finally, the newly created node is inserted after the dense node through [mindspore.rewrite.SymbolTree.insert](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.insert) . +2. Then the dense node is fetched by using [mindspore.rewrite.SymbolTree.get_node](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_node) . +3. 
Finally, the newly created node is inserted after the dense node through [mindspore.rewrite.SymbolTree.insert](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.insert) . The results are as follows: @@ -178,8 +178,8 @@ node is automatically updated with the node insertion. The definition of `self.new_relu` in the code of new node is saved in the init function of the new network, using parameter `new_relu_cell` as the instance. -In addition to getting nodes using [mindspore.rewrite.SymbolTree.get_node](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_node) to specify the insertion location, we can -also iterate through nodes by [mindspore.rewrite.SymbolTree.nodes](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.nodes) and use [mindspore.rewrite.Node.get_instance_type](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.get_instance_type) +In addition to getting nodes using [mindspore.rewrite.SymbolTree.get_node](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_node) to specify the insertion location, we can +also iterate through nodes by [mindspore.rewrite.SymbolTree.nodes](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.nodes) and use [mindspore.rewrite.Node.get_instance_type](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.get_instance_type) to get the node and determine the insertion position based on the type of corresponding instance of node. 
``` python @@ -189,10 +189,10 @@ for node in stree.nodes(): ``` If we want the output of new code to be inserted does not reuse variables from the original network, we can -use [mindspore.rewrite.SymbolTree.unique_name](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.unique_name) to get an variable name that are not duplicated in the SymbolTree +use [mindspore.rewrite.SymbolTree.unique_name](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.unique_name) to get an variable name that are not duplicated in the SymbolTree as the output of node when creating nodes. -Then, before inserting the node, we can modify the node input variable name by using [mindspore.rewrite.Node.set_arg](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.set_arg) +Then, before inserting the node, we can modify the node input variable name by using [mindspore.rewrite.Node.set_arg](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.set_arg) to set which nodes use the new node output as input. ``` python @@ -230,7 +230,7 @@ It can be seen that the output variable name of new node is an unnamed name ``x_ ## Deleting Nodes When we need to delete a line of code during the forward computation of the network, we can use the interface -[mindspore.rewrite.SymbolTree.erase](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.erase) to delete the node. +[mindspore.rewrite.SymbolTree.erase](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.erase) to delete the node. After the node is deleted, the topological relationship of the remaining nodes in the symbol tree will be automatically updated according to the code of network after deletion. 
@@ -292,7 +292,7 @@
stree.erase(relu_node)
stree.print_node_tabulate()
```
-In this example, after getting the relu node, first we use the interface [mindspore.rewrite.Node.get_users](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.get_users) to
+In this example, after getting the relu node, we first use the interface [mindspore.rewrite.Node.get_users](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.Node.get_users) to
iterate through the nodes that use the output of the relu node as input, change the input of these nodes to
the input of the relu node, and then delete the relu node. In this case, the subsequent use of the relu node
output ``z`` will be changed to the relu node input ``y`` .
@@ -315,7 +315,7 @@ It can be seen that after deleting the relu node, the value of the last return n
## Replacing Nodes

When we need to replace code during the forward computation of the network, we can replace the node with the
-interface [mindspore.rewrite.SymbolTree.replace](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.replace) .
+interface [mindspore.rewrite.SymbolTree.replace](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.replace) .

``` python
from mindspore.rewrite import SymbolTree, Node, ScopedValue
@@ -384,7 +384,7 @@ updated to the output of the first new node.
## Returning A New Network

-When the network is modified, we can use the interface [mindspore.rewrite.SymbolTree.get_network](https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_network) to get the
+When the network is modified, we can use the interface [mindspore.rewrite.SymbolTree.get_network](https://mindspore.cn/docs/en/master/api_python/mindspore.rewrite.html#mindspore.rewrite.SymbolTree.get_network) to get the
modified network instance.
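The erase-and-rewire step in the hunk above (users of the erased node's output ``z`` are redirected to its input ``y`` before removal) can be sketched without MindSpore. The `ToyNode` and `erase_with_rewire` names are hypothetical stand-ins for `Node.get_users`, `Node.set_arg`, and `SymbolTree.erase`; this is a minimal sketch of the rewiring logic, not the real API.

``` python
# Toy dataflow node: one input variable name (arg) and one output name (out).
class ToyNode:
    def __init__(self, name, arg, out):
        self.name, self.arg, self.out = name, arg, out


def erase_with_rewire(nodes, victim):
    """Redirect every user of victim's output to victim's input, then remove it."""
    for n in nodes:
        if n.arg == victim.out:   # n consumes victim's output (cf. Node.get_users)
            n.arg = victim.arg    # repoint it to victim's input (cf. Node.set_arg)
    nodes.remove(victim)


dense = ToyNode("dense", "x", "y")
relu = ToyNode("relu", "y", "z")
ret = ToyNode("return", "z", None)
nodes = [dense, relu, ret]

erase_with_rewire(nodes, relu)
print(ret.arg)  # 'y' -- the return node now consumes the relu node's old input
```

This matches the behavior described in the text: after erasing the relu node, the subsequent use of ``z`` becomes a use of ``y``.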
``` python diff --git a/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Context.rst b/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Context.rst index bdac4cd52e4..96ea887f3b7 100644 --- a/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Context.rst +++ b/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Context.rst @@ -56,7 +56,7 @@ mindspore_lite.Context 获取和设置分布式推理的通信分组信息。 在Pipeline并行场景下,不同Stage设备节点处于不同的通信分组中,在模型导出时,通过接口 - [mindspore.set_auto_parallel_context](https://www.mindspore.cn/docs/zh-CN/r2.3.q1/api_python/mindspore/mindspore.set_auto_parallel_context.html) + [mindspore.set_auto_parallel_context](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore/mindspore.set_auto_parallel_context.html) 设置 `group_ckpt_save_file` 参数导出分组文件信息。另外,非Pipeline并行场景,如果存在通信算子涉及局部分组,同样需通过 `group_ckpt_save_file` 参数获取分组文件信息。 .. py:method:: target diff --git a/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Converter.rst b/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Converter.rst index d5abcd4eb02..235a5d650c0 100644 --- a/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Converter.rst +++ b/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Converter.rst @@ -11,7 +11,7 @@ mindspore_lite.Converter 2. 将MindSpore模型转换生成MindSpore模型或MindSpore Lite模型。 - 推荐转换为MindSpore模型。目前,支持转换为MindSpore Lite模型,但是该选项将会被废弃。如有需要,请使用 `推理模型离线转换 `_ 来替换Python接口。Model接口和ModelParallelRunner接口只支持MindSpore模型。 + 推荐转换为MindSpore模型。目前,支持转换为MindSpore Lite模型,但是该选项将会被废弃。如有需要,请使用 `推理模型离线转换 `_ 来替换Python接口。Model接口和ModelParallelRunner接口只支持MindSpore模型。 .. 
note:: 请先构造Converter类,再通过执行Converter.convert()方法生成模型。 @@ -23,11 +23,11 @@ mindspore_lite.Converter 执行转换,将第三方模型转换为MindSpore模型或MindSpore Lite模型。 参数: - - **fmk_type** (FmkType) - 输入模型框架类型。选项有 ``FmkType.TF`` 、 ``FmkType.CAFFE`` 、 ``FmkType.ONNX`` 、 ``FmkType.MINDIR`` 、 ``FmkType.TFLITE`` 、 ``FmkType.PYTORCH`` 。有关详细信息,请参见 `框架类型 `_ 。 + - **fmk_type** (FmkType) - 输入模型框架类型。选项有 ``FmkType.TF`` 、 ``FmkType.CAFFE`` 、 ``FmkType.ONNX`` 、 ``FmkType.MINDIR`` 、 ``FmkType.TFLITE`` 、 ``FmkType.PYTORCH`` 。有关详细信息,请参见 `框架类型 `_ 。 - **model_file** (str) - 转换时的输入模型文件路径。例如: ``"/home/user/model.prototxt"`` 。选项有TF: ``"model.pb"`` 、 CAFFE: ``"model.prototxt"`` 、 ONNX: ``"model.onnx"`` 、 MINDIR: ``"model.mindir"`` 、 TFLITE: ``"model.tflite"`` 、 PYTORCH: ``"model.pt or model.pth"``。 - **output_file** (str) - 转换时的输出模型文件路径。可自动生成.ms或.mindir后缀。如果将 `save_type` 设置为 ``ModelType.MINDIR`` ,那么将生成MindSpore模型,该模型使用.mindir作为后缀。如果将 `save_type` 设置为 ``ModelType.MINDIR_LITE`` ,那么将生成MindSpore Lite模型,该模型使用.ms作为后缀。例如:输入模型为"/home/user/model.prototxt",将 `save_type` 设置为 ``ModelType.MINDIR`` ,它将生成名为model.prototxt.mindir的模型在/home/user/路径下。 - **weight_file** (str,可选) - 输入模型权重文件。仅当输入模型框架类型为 ``FmkType.CAFFE`` 时必选,Caffe模型一般分为两个文件: `model.prototxt` 是模型结构,对应 `model_file` 参数; `model.caffemodel` 是模型权值文件,对应 `weight_file` 参数。例如:"/home/user/model.caffemodel"。默认值: ``""`` ,表示无模型权重文件。 - - **config_file** (str,可选) - Converter的配置文件,可配置训练后量化或离线拆分算子并行或禁用算子融合功能并将插件设置为so路径等功能。 `config_file` 配置文件采用 `key = value` 的方式定义相关参数,有关训练后量化的配置参数,请参见 `训练后量化 `_ 。有关扩展的配置参数,请参见 `扩展配置 `_ 。例如:"/home/user/model.cfg"。默认值: ``""`` ,表示不设置Converter的配置文件。 + - **config_file** (str,可选) - Converter的配置文件,可配置训练后量化或离线拆分算子并行或禁用算子融合功能并将插件设置为so路径等功能。 `config_file` 配置文件采用 `key = value` 的方式定义相关参数,有关训练后量化的配置参数,请参见 `训练后量化 `_ 。有关扩展的配置参数,请参见 `扩展配置 `_ 。例如:"/home/user/model.cfg"。默认值: ``""`` ,表示不设置Converter的配置文件。 异常: - **TypeError** - `fmk_type` 不是FmkType类型。 @@ -109,7 +109,7 @@ mindspore_lite.Converter 获取量化模型输入Tensor的数据类型。 返回: - DataType,量化模型输入Tensor的数据类型。仅当模型输入Tensor的量化参数( `scale` 
和 `zero point` )都具备时有效。默认与原始模型输入Tensor的data type保持一致。支持以下4种数据类型: ``DataType.FLOAT32`` 、 ``DataType.INT8`` 、 ``DataType.UINT8`` 、 ``DataType.UNKNOWN`` 。默认值: ``DataType.FLOAT32`` 。有关详细信息,请参见 `数据类型 `_ 。 + DataType,量化模型输入Tensor的数据类型。仅当模型输入Tensor的量化参数( `scale` 和 `zero point` )都具备时有效。默认与原始模型输入Tensor的data type保持一致。支持以下4种数据类型: ``DataType.FLOAT32`` 、 ``DataType.INT8`` 、 ``DataType.UINT8`` 、 ``DataType.UNKNOWN`` 。默认值: ``DataType.FLOAT32`` 。有关详细信息,请参见 `数据类型 `_ 。 - **DataType.FLOAT32** - 32位浮点数。 - **DataType.INT8** - 8位整型数。 @@ -122,7 +122,7 @@ mindspore_lite.Converter 获取模型的输入format。 返回: - Format,模型的输入format。仅对四维输入有效。支持以下2种输入格式: ``Format.NCHW`` 、 ``Format.NHWC`` 。默认值: ``Format.NHWC`` 。有关详细信息,请参见 `数据格式 `_ 。 + Format,模型的输入format。仅对四维输入有效。支持以下2种输入格式: ``Format.NCHW`` 、 ``Format.NHWC`` 。默认值: ``Format.NHWC`` 。有关详细信息,请参见 `数据格式 `_ 。 - **Format.NCHW** - 按批次N、通道C、高度H和宽度W的顺序存储Tensor数据。 - **Format.NHWC** - 按批次N、高度H、宽度W和通道C的顺序存储Tensor数据。 @@ -161,7 +161,7 @@ mindspore_lite.Converter 获取量化模型输出Tensor的data type。 返回: - DataType,量化模型输出Tensor的data type。仅当模型输出Tensor的量化参数(scale和zero point)都具备时有效。默认与原始模型输出Tensor的data type保持一致。支持以下4种数据类型:``DataType.FLOAT32`` 、 ``DataType.INT8`` 、 ``DataType.UINT8`` 、 ``DataType.UNKNOWN``。有关详细信息,请参见 `数据类型 `_ 。 + DataType,量化模型输出Tensor的data type。仅当模型输出Tensor的量化参数(scale和zero point)都具备时有效。默认与原始模型输出Tensor的data type保持一致。支持以下4种数据类型:``DataType.FLOAT32`` 、 ``DataType.INT8`` 、 ``DataType.UINT8`` 、 ``DataType.UNKNOWN``。有关详细信息,请参见 `数据类型 `_ 。 - **DataType.FLOAT32** - 32位浮点数。 - **DataType.INT8** - 8位整型数。 @@ -182,7 +182,7 @@ mindspore_lite.Converter 获取导出模型文件的类型。 返回: - ModelType,导出模型文件的类型。选项有 ``ModelType.MINDIR`` 、 ``ModelType.MINDIR_LITE`` 。推荐转换为MindSpore模型。目前,支持转换为MindSpore Lite模型,但是该选项将会被废弃。有关详细信息,请参见 `模型类型 `_ 。 + ModelType,导出模型文件的类型。选项有 ``ModelType.MINDIR`` 、 ``ModelType.MINDIR_LITE`` 。推荐转换为MindSpore模型。目前,支持转换为MindSpore Lite模型,但是该选项将会被废弃。有关详细信息,请参见 `模型类型 `_ 。 .. 
py:method:: set_config_info(section="", config_info=None) @@ -191,9 +191,9 @@ mindspore_lite.Converter 参数: - **section** (str,可选) - 配置参数的类别。配合 `config_info` 一起,设置confile的个别参数。例如:对于 `section` 是 ``"common_quant_param"`` , `config_info` 是{"quant_type":"WEIGHT_QUANT"}。默认值: ``""`` 。 - 有关训练后量化的配置参数,请参见 `训练后量化 `_ 。 + 有关训练后量化的配置参数,请参见 `训练后量化 `_ 。 - 有关扩展的配置参数,请参见 `扩展配置 `_ 。 + 有关扩展的配置参数,请参见 `扩展配置 `_ 。 - ``"common_quant_param"``:公共量化参数部分。 - ``"mixed_bit_weight_quant_param"``:混合位权重量化参数部分。 @@ -203,9 +203,9 @@ mindspore_lite.Converter - **config_info** (dict{str: str},可选) - 配置参数列表。配合 `section` 一起,设置confile的个别参数。例如:对于 `section` 是 ``"common_quant_param"`` , `config_info` 是{"quant_type":"WEIGHT_QUANT"}。默认值: ``None`` 。 - 有关训练后量化的配置参数,请参见 `训练后量化 `_ 。 + 有关训练后量化的配置参数,请参见 `训练后量化 `_ 。 - 有关扩展的配置参数,请参见 `扩展配置 `_ 。 + 有关扩展的配置参数,请参见 `扩展配置 `_ 。 异常: - **TypeError** - `section` 不是str类型。 diff --git a/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Model.rst b/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Model.rst index cf3da8f8869..66995f40f22 100644 --- a/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Model.rst +++ b/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Model.rst @@ -11,7 +11,7 @@ mindspore_lite.Model 参数: - **model_path** (str) - 定义输入模型文件的路径,例如:"/home/user/model.mindir"。模型应该使用.mindir作为后缀。 - - **model_type** (ModelType) - 定义输入模型文件的类型。选项有 ``ModelType::MINDIR`` 。有关详细信息,请参见 `模型类型 `_ 。 + - **model_type** (ModelType) - 定义输入模型文件的类型。选项有 ``ModelType::MINDIR`` 。有关详细信息,请参见 `模型类型 `_ 。 - **context** (Context,可选) - 定义上下文,用于在执行期间传递选项。默认值: ``None`` 。 ``None`` 表示设置target为cpu的Context。 - **config_path** (str,可选) - 定义配置文件的路径,用于在构建模型期间传递用户定义选项。在以下场景中,用户可能需要设置参数。例如:"/home/user/config.txt"。默认值: ``""`` 。 @@ -132,4 +132,4 @@ mindspore_lite.Model 教程样例: - `动态权重更新 - `_ + `_ diff --git a/mindspore/python/mindspore/_extends/parse/parser.py b/mindspore/python/mindspore/_extends/parse/parser.py index 67a1760407b..6709ade8087 100644 --- 
a/mindspore/python/mindspore/_extends/parse/parser.py +++ b/mindspore/python/mindspore/_extends/parse/parser.py @@ -486,7 +486,7 @@ def convert_class_to_function(cls_str, cls_obj): f"supported in 'construct' or @jit decorated function. Try to create {cls_str} " f"instances external such as initialized in the method '__init__' before assigning. " f"For more details, please refer to " - f"https://www.mindspore.cn/docs/zh-CN/r2.3.q1/design/dynamic_graph_and_static_graph.html \n") + f"https://www.mindspore.cn/docs/zh-CN/master/design/dynamic_graph_and_static_graph.html \n") return convert_class_to_function_map.get(cls_str) diff --git a/mindspore/python/mindspore/amp.py b/mindspore/python/mindspore/amp.py index a54151135b6..add1d68b023 100644 --- a/mindspore/python/mindspore/amp.py +++ b/mindspore/python/mindspore/amp.py @@ -132,7 +132,7 @@ def all_finite(inputs): Tutorial Examples: - `Automatic Mix Precision - Loss Scaling - `_ + `_ """ inputs = mutable(inputs) _check_overflow_mode = os.environ.get('MS_ASCEND_CHECK_OVERFLOW_MODE') @@ -148,7 +148,7 @@ class LossScaler(ABC): to scale and unscale the loss value and gradients to avoid overflow, `adjust` is used to update the loss scale value. - For more information, refer to the `tutorials `_. .. 
warning:: @@ -340,7 +340,7 @@ class DynamicLossScaler(LossScaler): Tutorial Examples: - `Automatic Mix Precision - Loss Scaling - `_ + `_ """ inputs = mutable(inputs) return _grad_scale_map(self.scale_value, inputs) @@ -357,7 +357,7 @@ class DynamicLossScaler(LossScaler): Tutorial Examples: - `Automatic Mix Precision - Loss Scaling - `_ + `_ """ inputs = mutable(inputs) return _grad_unscale_map(self.scale_value, inputs) @@ -371,7 +371,7 @@ class DynamicLossScaler(LossScaler): Tutorial Examples: - `Automatic Mix Precision - Loss Scaling - `_ + `_ """ one = ops.ones((), self.scale_value.dtype) scale_mul_factor = self.scale_value * self.scale_factor diff --git a/mindspore/python/mindspore/boost/boost_cell_wrapper.py b/mindspore/python/mindspore/boost/boost_cell_wrapper.py index 7f3118d8a10..fce9c32ccaa 100644 --- a/mindspore/python/mindspore/boost/boost_cell_wrapper.py +++ b/mindspore/python/mindspore/boost/boost_cell_wrapper.py @@ -136,7 +136,7 @@ class BoostTrainOneStepCell(TrainOneStepCell): >>> from mindspore import boost >>> from mindspore import nn >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits() >>> optim = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9) diff --git a/mindspore/python/mindspore/boost/group_loss_scale_manager.py b/mindspore/python/mindspore/boost/group_loss_scale_manager.py index 7f7ba450482..e13cbe079c3 100644 --- a/mindspore/python/mindspore/boost/group_loss_scale_manager.py +++ b/mindspore/python/mindspore/boost/group_loss_scale_manager.py @@ -94,7 +94,7 @@ class GroupLossScaleManager(Cell): ... loss_scale_manager=loss_scale_manager, ... boost_level="O1", boost_config_dict=boost_config_dict) >>> # Create the dataset taking MNIST as an example. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> model.train(2, dataset) """ diff --git a/mindspore/python/mindspore/common/api.py b/mindspore/python/mindspore/common/api.py index c16b675ba9a..a1b49fde445 100644 --- a/mindspore/python/mindspore/common/api.py +++ b/mindspore/python/mindspore/common/api.py @@ -625,8 +625,8 @@ def jit(fn=None, mode="PSJit", input_signature=None, hash_args=None, jit_config= fn (Function): The Python function that will be run as a graph. Default: ``None`` . mode (str): The type of jit used, the value of mode should be ``PIJit`` or ``PSJit``. Default: ``PSJit`` . - - `PSJit `_ : MindSpore GRAPH_MODE. - - `PIJit `_ : MindSpore PYNATIVE_MODE. + - `PSJit `_ : MindSpore GRAPH_MODE. + - `PIJit `_ : MindSpore PYNATIVE_MODE. input_signature (Tensor): The Tensor which describes the input arguments. The shape and dtype of the Tensor will be supplied to this function. If input_signature is specified, each input to `fn` must be a `Tensor`. diff --git a/mindspore/python/mindspore/common/dtype.py b/mindspore/python/mindspore/common/dtype.py index 192d5bc1d5f..2a7f6cbc68b 100644 --- a/mindspore/python/mindspore/common/dtype.py +++ b/mindspore/python/mindspore/common/dtype.py @@ -349,7 +349,7 @@ class QuantDtype(enum.Enum): An enum for quant datatype, contains `INT1` ~ `INT16`, `UINT1` ~ `UINT16`. `QuantDtype` is defined in - `dtype.py `_ , + `dtype.py `_ , use command below to import: .. code-block:: diff --git a/mindspore/python/mindspore/common/dump.py b/mindspore/python/mindspore/common/dump.py index 720bf201fff..221c263f070 100644 --- a/mindspore/python/mindspore/common/dump.py +++ b/mindspore/python/mindspore/common/dump.py @@ -27,7 +27,7 @@ def set_dump(target, enabled=True): `target` should be an instance of :class:`mindspore.nn.Cell` or :class:`mindspore.ops.Primitive` . 
Please note that this API takes effect only when Asynchronous Dump is enabled and the `dump_mode` field in dump config file is ``"2"`` . See the `dump document - `_ for details. + `_ for details. The default enabled status for a :class:`mindspore.nn.Cell` or :class:`mindspore.ops.Primitive` is False. @@ -61,7 +61,7 @@ def set_dump(target, enabled=True): .. note:: Please set environment variable `MINDSPORE_DUMP_CONFIG` to the dump config file and set `dump_mode` field in dump config file to 2 before running this example. - See `dump document `_ for details. + See `dump document `_ for details. >>> import numpy as np >>> import mindspore as ms diff --git a/mindspore/python/mindspore/common/initializer.py b/mindspore/python/mindspore/common/initializer.py index 0ba1eb95fd6..3ac184099e9 100644 --- a/mindspore/python/mindspore/common/initializer.py +++ b/mindspore/python/mindspore/common/initializer.py @@ -37,7 +37,7 @@ class Initializer: Initializers are intended to be used for delayed initialization in parallel mode rather than Tensor initialization. If you have to use Initializers to create a Tensor, :func:`mindspore.Tensor.init_data` should be followed in most of the cases. For more information, please refer to `mindspore.Tensor.init_data - `_ . Args: diff --git a/mindspore/python/mindspore/common/jit_config.py b/mindspore/python/mindspore/common/jit_config.py index 64286ac266b..1c62bc9b014 100644 --- a/mindspore/python/mindspore/common/jit_config.py +++ b/mindspore/python/mindspore/common/jit_config.py @@ -42,7 +42,7 @@ class JitConfig: The value must be ``"STRICT"`` , ``"LAX"`` or ``""`` . Default to an empty string, which means that this JitConfig configuration will be ignored and the jit_syntax_level of ms.context will be used. For more details about ms.context, refer to - `set_context `_ . + `set_context `_ . Default: ``""`` . - ``"STRICT"``: Only basic syntax is supported, and execution performance is optimal. 
Can be used for MindIR @@ -66,7 +66,7 @@ class JitConfig: >>> jitconfig = JitConfig(jit_level="O1") >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> >>> net.set_jit_config(jitconfig) diff --git a/mindspore/python/mindspore/common/parameter.py b/mindspore/python/mindspore/common/parameter.py index fdf82cb4b3a..e02e83568d0 100644 --- a/mindspore/python/mindspore/common/parameter.py +++ b/mindspore/python/mindspore/common/parameter.py @@ -364,7 +364,7 @@ class Parameter(Tensor_): Tutorial Examples: - `Parameter Server Mode - `_ + `_ """ if not _is_ps_mode() or not (_is_role_worker() or _is_role_pserver() or _is_role_sched()): raise RuntimeError("Must complete following two steps before calling set_param_ps: \n" @@ -1023,7 +1023,7 @@ class ParameterTuple(tuple): Tutorial Examples: - `Cell and Parameter - Parameter Tuple - `_ + `_ """ Validator.check_str_by_regular(prefix) new = [] diff --git a/mindspore/python/mindspore/common/sparse_tensor.py b/mindspore/python/mindspore/common/sparse_tensor.py index 8dd70c77d46..715e57a9ca0 100644 --- a/mindspore/python/mindspore/common/sparse_tensor.py +++ b/mindspore/python/mindspore/common/sparse_tensor.py @@ -226,7 +226,7 @@ class COOTensor(COOTensor_): Common arithmetic operations include: addition (+), subtraction (-), multiplication (*), and division (/). For details about operations supported by `COOTensor`, see - `operators `_. + `operators `_. .. warning:: - This is an experimental API that is subject to change or deletion. @@ -653,7 +653,7 @@ class CSRTensor(CSRTensor_): Common arithmetic operations include: addition (+), subtraction (-), multiplication (*), and division (/). For details about operations supported by `CSRTensor`, see - `operators `_. + `operators `_. .. 
warning:: - This is an experimental API that is subjected to change. diff --git a/mindspore/python/mindspore/common/tensor.py b/mindspore/python/mindspore/common/tensor.py index cd998f285a3..292ac8a0dce 100644 --- a/mindspore/python/mindspore/common/tensor.py +++ b/mindspore/python/mindspore/common/tensor.py @@ -83,11 +83,11 @@ def tensor(input_data=None, dtype=None, shape=None, init=None, internal=False, c based on the `dtype` argument. Please refer to `Creating and Using Tensor - `_ . + `_ . The difference between it and the Tensor class is that it adds `Annotation - `_ + `_ which can prevent the generation of AnyType compared to the Tensor class. The arguments and return values are the same as the Tensor class. Also see: :class:`mindspore.Tensor`. @@ -2720,7 +2720,7 @@ class Tensor(Tensor_, metaclass=_TensorMeta): opt_shard_group(str): Optimizer shard group which is used in auto or semi auto parallel mode to get one shard of a parameter's slice. For more information about optimizer parallel, please refer to: `Optimizer Parallel - `_. + `_. Default: ``None``. Returns: diff --git a/mindspore/python/mindspore/communication/__init__.py b/mindspore/python/mindspore/communication/__init__.py index 05f03281dbd..0a4461fbf90 100644 --- a/mindspore/python/mindspore/communication/__init__.py +++ b/mindspore/python/mindspore/communication/__init__.py @@ -19,14 +19,14 @@ Note that the APIs in the following list need to preset communication environmen For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `rank table Startup -`_ +`_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup -`_ . +`_ . For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster Startup -`_ . +`_ . 
""" from mindspore.communication.management import GlobalComm, init, release, get_rank, \ diff --git a/mindspore/python/mindspore/communication/management.py b/mindspore/python/mindspore/communication/management.py index 43657077bf0..e2537627d22 100755 --- a/mindspore/python/mindspore/communication/management.py +++ b/mindspore/python/mindspore/communication/management.py @@ -138,14 +138,14 @@ def init(backend_name=None): For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `rank table Startup - `_ + `_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup - `_ . + `_ . For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster - Startup `_ . + Startup `_ . >>> from mindspore.communication import init >>> init() @@ -226,14 +226,14 @@ def release(): For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `rank table Startup - `_ + `_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup - `_ . + `_ . For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster - Startup `_ . + Startup `_ . >>> from mindspore.communication import init, release >>> init() @@ -270,14 +270,14 @@ def get_rank(group=GlobalComm.WORLD_COMM_GROUP): For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `rank table Startup - `_ + `_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup - `_ . + `_ . For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster - Startup `_ . + Startup `_ . 
>>> from mindspore.communication import init, get_rank >>> init() @@ -320,14 +320,14 @@ def get_local_rank(group=GlobalComm.WORLD_COMM_GROUP): For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `rank table Startup - `_ + `_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup - `_ . + `_ . For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster - Startup `_ . + Startup `_ . >>> import mindspore as ms >>> from mindspore.communication import init, get_rank, get_local_rank @@ -373,14 +373,14 @@ def get_group_size(group=GlobalComm.WORLD_COMM_GROUP): For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `rank table Startup - `_ + `_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup - `_ . + `_ . For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster - Startup `_ . + Startup `_ . >>> import mindspore as ms >>> from mindspore.communication import init, get_group_size @@ -425,14 +425,14 @@ def get_local_rank_size(group=GlobalComm.WORLD_COMM_GROUP): For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `rank table Startup - `_ + `_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup - `_ . + `_ . For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster - Startup `_ . + Startup `_ . >>> import mindspore as ms >>> from mindspore.communication import init, get_local_rank_size @@ -480,14 +480,14 @@ def get_world_rank_from_group_rank(group, group_rank_id): For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. 
Please see the `rank table Startup - `_ + `_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup - `_ + `_ For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster - Startup `_ . + Startup `_ . >>> import mindspore as ms >>> from mindspore import set_context @@ -539,14 +539,14 @@ def get_group_rank_from_world_rank(world_rank_id, group): For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `rank table Startup - `_ + `_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup - `_ + `_ For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster - Startup `_ . + Startup `_ . >>> import mindspore as ms >>> from mindspore import set_context @@ -595,14 +595,14 @@ def create_group(group, rank_ids): For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `rank table Startup - `_ + `_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup - `_ . + `_ . For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster - Startup `_ . + Startup `_ . >>> import mindspore as ms >>> from mindspore import set_context @@ -648,14 +648,14 @@ def destroy_group(group): For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `rank table startup - `_ + `_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun startup - `_ . + `_ . For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster - Startup `_ . + Startup `_ . 
>>> import mindspore as ms >>> from mindspore import set_context diff --git a/mindspore/python/mindspore/context.py b/mindspore/python/mindspore/context.py index dd6f917772c..07fec3b7ddf 100644 --- a/mindspore/python/mindspore/context.py +++ b/mindspore/python/mindspore/context.py @@ -1289,7 +1289,7 @@ def set_context(**kwargs): If enable_graph_kernel is set to ``True`` , acceleration can be enabled. For details of graph kernel fusion, please check `Enabling Graph Kernel Fusion - `_. + `_. graph_kernel_flags (str): Optimization options of graph kernel fusion, and the priority is higher when it conflicts with enable_graph_kernel. Only for experienced users. @@ -1434,7 +1434,7 @@ def set_context(**kwargs): - parallel_speed_up_json_path(Union[str, None]): The path to the parallel speed up json file, configuration can refer to `parallel_speed_up.json - `_ . + `_ . If its value is None or '', it does not take effect. Default None. - recompute_comm_overlap (bool): Enable overlap between recompute ops and communication ops if True. @@ -1445,11 +1445,11 @@ def set_context(**kwargs): Default: False. - enable_grad_comm_opt (bool): Enable overlap between dx ops and data parallel communication ops if True. Currently, do not support - `LazyInline ` + `LazyInline ` Default: False. - enable_opt_shard_comm_opt (bool): Enable overlap between forward ops and optimizer parallel allgather communication if True. Currently, do not support - `LazyInline ` + `LazyInline ` Default: False. - compute_communicate_fusion_level (int): Enable the fusion between compute and communicate. Default: ``0``. diff --git a/mindspore/python/mindspore/dataset/__init__.py b/mindspore/python/mindspore/dataset/__init__.py index 9722b71add9..461f794b221 100644 --- a/mindspore/python/mindspore/dataset/__init__.py +++ b/mindspore/python/mindspore/dataset/__init__.py @@ -21,7 +21,7 @@ Besides, this module provides APIs to sample data while loading. 
We can enable cache in most of the dataset with its key arguments 'cache'. Please notice that cache is not supported on Windows platform yet. Do not use it while loading and processing data on Windows. More introductions and limitations -can refer `Single-Node Tensor Cache `_ . +can refer `Single-Node Tensor Cache `_ . Common imported modules in corresponding API examples are as follows: @@ -55,11 +55,11 @@ The specific steps are as follows: - Dataset operation: The user uses the dataset object method `.shuffle` / `.filter` / `.skip` / `.split` / `.take` / ... to further shuffle, filter, skip, and obtain the maximum number of samples of datasets; - Dataset sample transform operation: The user can add data transform operations - ( `vision transform `_ , - `NLP transform `_ , - `audio transform `_ ) to the map operation to perform transformations. During data preprocessing, multiple map operations can be defined to perform different transform operations to different fields. The data transform operation can also be a @@ -73,7 +73,7 @@ Quick start of Dataset Pipeline ------------------------------- For a quick start of using Dataset Pipeline, download `Load & Process Data With Dataset Pipeline -`_ +`_ to local and run in sequence. """ diff --git a/mindspore/python/mindspore/dataset/audio/__init__.py b/mindspore/python/mindspore/dataset/audio/__init__.py index 44de7ad4dd8..44602d19d8d 100644 --- a/mindspore/python/mindspore/dataset/audio/__init__.py +++ b/mindspore/python/mindspore/dataset/audio/__init__.py @@ -40,10 +40,10 @@ Descriptions of common data processing terms are as follows: The data transform operation can be executed in the data processing pipeline or in the eager mode: - Pipeline mode is generally used to process big datasets. Examples refer to - `introduction to data processing pipeline `_ . - Eager mode is more like a function call to process data. Examples refer to - `Lightweight Data Processing `_ . + `Lightweight Data Processing `_ . 
""" from __future__ import absolute_import diff --git a/mindspore/python/mindspore/dataset/audio/transforms.py b/mindspore/python/mindspore/dataset/audio/transforms.py index 505636ec017..acf3b15b613 100644 --- a/mindspore/python/mindspore/dataset/audio/transforms.py +++ b/mindspore/python/mindspore/dataset/audio/transforms.py @@ -111,7 +111,7 @@ class AllpassBiquad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_allpass_biquad @@ -184,7 +184,7 @@ class AmplitudeToDB(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_amplitude_to_db @@ -236,7 +236,7 @@ class Angle(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ def parse(self): @@ -300,7 +300,7 @@ class BandBiquad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_band_biquad @@ -378,7 +378,7 @@ class BandpassBiquad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_bandpass_biquad @@ -451,7 +451,7 @@ class BandrejectBiquad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_bandreject_biquad @@ -522,7 +522,7 @@ class BassBiquad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_bass_biquad @@ -585,7 +585,7 @@ class Biquad(TensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_biquad @@ -644,7 +644,7 @@ class ComplexNorm(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_complex_norm @@ -721,7 +721,7 @@ class ComputeDeltas(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_compute_deltas @@ -781,7 +781,7 @@ class Contrast(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_contrast @@ -831,7 +831,7 @@ class 
DBToAmplitude(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_db_to_amplitude @@ -885,7 +885,7 @@ class DCShift(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_dc_shift @@ -938,7 +938,7 @@ class DeemphBiquad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_deemph_biquad @@ -1004,7 +1004,7 @@ class DetectPitchFrequency(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_detect_pitch_frequency @@ -1071,7 +1071,7 @@ class Dither(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_dither @@ -1130,7 +1130,7 @@ class EqualizerBiquad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_equalizer_biquad @@ -1202,7 +1202,7 @@ class Fade(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_fade @@ -1260,7 +1260,7 @@ class Filtfilt(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_lfilter @@ -1345,7 +1345,7 @@ class Flanger(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_flanger @@ -1422,7 +1422,7 @@ class FrequencyMasking(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ .. 
image:: frequency_masking_original.png @@ -1478,7 +1478,7 @@ class Gain(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_gain @@ -1559,7 +1559,7 @@ class GriffinLim(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_griffin_lim @@ -1627,7 +1627,7 @@ class HighpassBiquad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_highpass_biquad @@ -1708,7 +1708,7 @@ class InverseMelScale(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_inverse_mel_scale @@ -1803,7 +1803,7 @@ class InverseSpectrogram(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_inverse_spectrogram @@ -1901,7 +1901,7 @@ class LFCC(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_lfcc @@ -1984,7 +1984,7 @@ class LFilter(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_lfilter @@ -2053,7 +2053,7 @@ class LowpassBiquad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_lowpass_biquad @@ -2104,7 +2104,7 @@ class Magphase(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_magphase @@ -2157,7 +2157,7 @@ class MaskAlongAxis(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_mask_along_axis @@ -2219,7 +2219,7 @@ class MaskAlongAxisIID(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_mask_along_axis_iid @@ -2296,7 +2296,7 @@ class MelScale(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_mel_scale @@ -2411,7 +2411,7 @@ class MelSpectrogram(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ 
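The `FrequencyMasking` docstrings updated above describe replacing a band of consecutive frequency bins of a spectrogram with a mask value. The idea can be sketched in plain Python on a `(freq, time)` list-of-lists; this is an illustrative sketch only, not MindSpore's implementation, and the real operator's sampling rules and parameters (`iid_masks`, `mask_start`, etc.) differ:

```python
import random

def frequency_masking(spec, freq_mask_param, mask_value=0.0, seed=None):
    """Mask a random band of consecutive frequency rows of a (freq, time)
    spectrogram. Sketch only: a width f is drawn from [0, freq_mask_param]
    and rows [f0, f0 + f) are replaced by mask_value."""
    rng = random.Random(seed)
    n_freq = len(spec)
    f = rng.randint(0, min(freq_mask_param, n_freq))   # mask width
    f0 = rng.randint(0, n_freq - f)                    # first masked row
    return [
        [mask_value] * len(row) if f0 <= i < f0 + f else list(row)
        for i, row in enumerate(spec)
    ]

spec = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
masked = frequency_masking(spec, freq_mask_param=2, seed=0)
```

Unmasked rows pass through unchanged, so at most `freq_mask_param` rows are zeroed and the output shape matches the input.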
@check_mel_spectrogram @@ -2508,7 +2508,7 @@ class MFCC(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_mfcc @@ -2587,7 +2587,7 @@ class MuLawDecoding(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_mu_law_coding @@ -2636,7 +2636,7 @@ class MuLawEncoding(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_mu_law_coding @@ -2693,7 +2693,7 @@ class Overdrive(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_overdrive @@ -2766,7 +2766,7 @@ class Phaser(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_phaser @@ -2826,7 +2826,7 @@ class PhaseVocoder(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_phase_vocoder @@ -2894,7 +2894,7 @@ class PitchShift(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_pitch_shift @@ -2978,7 +2978,7 @@ class Resample(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_resample @@ -3038,7 +3038,7 @@ class RiaaBiquad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_riaa_biquad @@ -3096,7 +3096,7 @@ class SlidingWindowCmn(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_sliding_window_cmn @@ -3173,7 +3173,7 @@ class SpectralCentroid(TensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_spectral_centroid @@ -3257,7 +3257,7 @@ class Spectrogram(TensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_spectrogram @@ -3335,7 +3335,7 @@ class TimeMasking(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ .. 
image:: time_masking_original.png @@ -3404,7 +3404,7 @@ class TimeStretch(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ .. image:: time_stretch_rate1.5.png @@ -3475,7 +3475,7 @@ class TrebleBiquad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_treble_biquad @@ -3592,7 +3592,7 @@ class Vad(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_vad @@ -3677,7 +3677,7 @@ class Vol(AudioTensorOperation): Tutorial Examples: - `Illustration of audio transforms - `_ + `_ """ @check_vol diff --git a/mindspore/python/mindspore/dataset/callback/ds_callback.py b/mindspore/python/mindspore/dataset/callback/ds_callback.py index 591b39b558a..b751f02d96f 100644 --- a/mindspore/python/mindspore/dataset/callback/ds_callback.py +++ b/mindspore/python/mindspore/dataset/callback/ds_callback.py @@ -160,7 +160,7 @@ class WaitedDSCallback(Callback, DSCallback): r""" Abstract base class used to build dataset callback classes that are synchronized with the training callback class `mindspore.train.Callback \ - `_ . It can be used to execute a custom callback method before a step or an epoch, such as @@ -171,7 +171,7 @@ class WaitedDSCallback(Callback, DSCallback): `device_number` , `list_callback` , `cur_epoch_num` , `cur_step_num` , `dataset_sink_mode` , `net_outputs` , etc., see `mindspore.train.Callback \ - `_ . Users can obtain the dataset pipeline context through `ds_run_context` , including diff --git a/mindspore/python/mindspore/dataset/engine/cache_client.py b/mindspore/python/mindspore/dataset/engine/cache_client.py index d8362961d4c..d6fbd966cd6 100644 --- a/mindspore/python/mindspore/dataset/engine/cache_client.py +++ b/mindspore/python/mindspore/dataset/engine/cache_client.py @@ -27,7 +27,7 @@ class DatasetCache: A client to interface with tensor caching service. For details, please check - `Tutorial `_ . + `Tutorial `_ . 
Args: session_id (int): A user assigned session id for the current pipeline. diff --git a/mindspore/python/mindspore/dataset/engine/datasets.py b/mindspore/python/mindspore/dataset/engine/datasets.py index 3dacf87faca..e7dd7acc2e3 100644 --- a/mindspore/python/mindspore/dataset/engine/datasets.py +++ b/mindspore/python/mindspore/dataset/engine/datasets.py @@ -861,11 +861,11 @@ class Dataset: `output_columns` , and if not specified, the column name of output column is same as that of `input_columns` . - If you use transformations ( - `vision transform `_ , - `nlp transform `_ , - `audio transform `_ ) provided by mindspore dataset, please use the following parameters: diff --git a/mindspore/python/mindspore/dataset/engine/datasets_audio.py b/mindspore/python/mindspore/dataset/engine/datasets_audio.py index cc2bf305970..e7dd80f8898 100644 --- a/mindspore/python/mindspore/dataset/engine/datasets_audio.py +++ b/mindspore/python/mindspore/dataset/engine/datasets_audio.py @@ -63,7 +63,7 @@ class CMUArcticDataset(MappableDataset, AudioBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None``, will use ``0``. This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None``, which means no cache is used. Raises: @@ -77,7 +77,7 @@ class CMUArcticDataset(MappableDataset, AudioBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - Not support :class:`mindspore.dataset.PKSampler` for `sampler` parameter yet. @@ -180,7 +180,7 @@ class GTZANDataset(MappableDataset, AudioBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. 
cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -194,7 +194,7 @@ class GTZANDataset(MappableDataset, AudioBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - Not support :class:`mindspore.dataset.PKSampler` for `sampler` parameter yet. @@ -298,7 +298,7 @@ class LibriTTSDataset(MappableDataset, AudioBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -312,7 +312,7 @@ class LibriTTSDataset(MappableDataset, AudioBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - Not support :class:`mindspore.dataset.PKSampler` for `sampler` parameter yet. @@ -425,7 +425,7 @@ class LJSpeechDataset(MappableDataset, AudioBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -439,7 +439,7 @@ class LJSpeechDataset(MappableDataset, AudioBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -548,7 +548,7 @@ class SpeechCommandsDataset(MappableDataset, AudioBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -562,7 +562,7 @@ class SpeechCommandsDataset(MappableDataset, AudioBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -661,7 +661,7 @@ class TedliumDataset(MappableDataset, AudioBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -675,7 +675,7 @@ class TedliumDataset(MappableDataset, AudioBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -841,7 +841,7 @@ class YesNoDataset(MappableDataset, AudioBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. 
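The dataset docstrings above repeatedly note that `num_shards` and `shard_id` control which slice of the data a worker reads. A minimal sketch of one common scheme, round-robin sharding, in which shard `k` takes every `num_shards`-th index starting at `k` so shards are disjoint and together cover the dataset (MindSpore's exact distribution policy may differ):

```python
def shard_indices(num_samples, num_shards, shard_id):
    """Return the sample indices one shard reads under round-robin
    assignment. Illustrative sketch of the num_shards / shard_id
    semantics, not MindSpore's sampler implementation."""
    if not 0 <= shard_id < num_shards:
        raise ValueError("shard_id must be in [0, num_shards)")
    return list(range(shard_id, num_samples, num_shards))

# Two of four shards over a 10-sample dataset:
a = shard_indices(10, 4, 0)  # [0, 4, 8]
b = shard_indices(10, 4, 3)  # [3, 7]
```

Note that shard sizes can differ by one when `num_samples` is not a multiple of `num_shards`, which is why some samplers pad or drop samples to equalize shards.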
cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -855,7 +855,7 @@ class YesNoDataset(MappableDataset, AudioBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler diff --git a/mindspore/python/mindspore/dataset/engine/datasets_standard_format.py b/mindspore/python/mindspore/dataset/engine/datasets_standard_format.py index f4d1dd7966b..50f9b22164e 100644 --- a/mindspore/python/mindspore/dataset/engine/datasets_standard_format.py +++ b/mindspore/python/mindspore/dataset/engine/datasets_standard_format.py @@ -77,7 +77,7 @@ class CSVDataset(SourceDataset, UnionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None``. This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None``, which means no cache is used. Raises: @@ -156,7 +156,7 @@ class MindDataset(MappableDataset, UnionBaseDataset): num_samples (int, optional): The number of samples to be included in the dataset. Default: ``None`` , all samples. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -307,7 +307,7 @@ class TFRecordDataset(SourceDataset, UnionBaseDataset): When `compression_type` is not ``None``, and `num_samples` or numRows (parsed from `schema` ) is provided, `shard_equal_rows` will be implied as ``True``. 
cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. compression_type (str, optional): The type of compression used for all files, must be either ``''``, ``'GZIP'``, or ``'ZLIB'``. Default: ``None`` , as in empty string. It is highly recommended to diff --git a/mindspore/python/mindspore/dataset/engine/datasets_text.py b/mindspore/python/mindspore/dataset/engine/datasets_text.py index 2d432c018e7..f259512b120 100644 --- a/mindspore/python/mindspore/dataset/engine/datasets_text.py +++ b/mindspore/python/mindspore/dataset/engine/datasets_text.py @@ -70,7 +70,7 @@ class AGNewsDataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . This argument can only be specified when `num_shards` is also specified. Default: ``None``. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None``, which means no cache is used. Raises: @@ -81,7 +81,7 @@ class AGNewsDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -178,7 +178,7 @@ class AmazonReviewDataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -189,7 +189,7 @@ class AmazonReviewDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -276,7 +276,7 @@ class CLUEDataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. The generated dataset with different task setting has different output columns: @@ -433,7 +433,7 @@ class CLUEDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -522,7 +522,7 @@ class CoNLL2000Dataset(SourceDataset, TextBaseDataset): Default: ``None`` , will use global default workers(8), it can be set by :func:`mindspore.dataset.config.set_num_parallel_workers` . cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -533,7 +533,7 @@ class CoNLL2000Dataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -621,7 +621,7 @@ class DBpediaDataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -633,7 +633,7 @@ class DBpediaDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -720,7 +720,7 @@ class EnWik9Dataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -731,7 +731,7 @@ class EnWik9Dataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -808,7 +808,7 @@ class IMDBDataset(MappableDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -822,7 +822,7 @@ class IMDBDataset(MappableDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The shape of the test column. @@ -947,7 +947,7 @@ class IWSLT2016Dataset(SourceDataset, TextBaseDataset): Default: ``None`` , will use global default workers(8), it can be set by :func:`mindspore.dataset.config.set_num_parallel_workers` . cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -958,7 +958,7 @@ class IWSLT2016Dataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -1079,7 +1079,7 @@ class IWSLT2017Dataset(SourceDataset, TextBaseDataset): Default: ``None`` , will use global default workers(8), it can be set by :func:`mindspore.dataset.config.set_num_parallel_workers` . cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -1090,7 +1090,7 @@ class IWSLT2017Dataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -1184,7 +1184,7 @@ class Multi30kDataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -1199,7 +1199,7 @@ class Multi30kDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -1293,7 +1293,7 @@ class PennTreebankDataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -1304,7 +1304,7 @@ class PennTreebankDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -1395,7 +1395,7 @@ class SogouNewsDataset(SourceDataset, TextBaseDataset): Default: ``None`` , will use global default workers(8), it can be set by :func:`mindspore.dataset.config.set_num_parallel_workers` . cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -1406,7 +1406,7 @@ class SogouNewsDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -1493,7 +1493,7 @@ class SQuADDataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -1505,7 +1505,7 @@ class SQuADDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -1611,7 +1611,7 @@ class SST2Dataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards`. This argument can only be specified when `num_shards` is also specified. Default: ``None`` . cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -1623,7 +1623,7 @@ class SST2Dataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -1714,7 +1714,7 @@ class TextFileDataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -1726,7 +1726,7 @@ class TextFileDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -1781,7 +1781,7 @@ class UDPOSDataset(SourceDataset, TextBaseDataset): Default: ``None`` , will use global default workers(8), it can be set by :func:`mindspore.dataset.config.set_num_parallel_workers` . cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -1792,7 +1792,7 @@ class UDPOSDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -1864,7 +1864,7 @@ class WikiTextDataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -1877,7 +1877,7 @@ class WikiTextDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ About WikiTextDataset dataset: @@ -1961,7 +1961,7 @@ class YahooAnswersDataset(SourceDataset, TextBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -1973,7 +1973,7 @@ class YahooAnswersDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -2064,7 +2064,7 @@ class YelpReviewDataset(SourceDataset, TextBaseDataset): Default: ``None`` , will use global default workers(8), it can be set by :func:`mindspore.dataset.config.set_num_parallel_workers` . cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -2075,7 +2075,7 @@ class YelpReviewDataset(SourceDataset, TextBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds diff --git a/mindspore/python/mindspore/dataset/engine/datasets_user_defined.py b/mindspore/python/mindspore/dataset/engine/datasets_user_defined.py index 8ab322b771d..95a57b7f91f 100644 --- a/mindspore/python/mindspore/dataset/engine/datasets_user_defined.py +++ b/mindspore/python/mindspore/dataset/engine/datasets_user_defined.py @@ -670,7 +670,7 @@ class GeneratorDataset(MappableDataset, UnionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - If you configure `python_multiprocessing=True` (Default: ``True`` ) and `num_parallel_workers>1` @@ -1012,7 +1012,7 @@ class NumpySlicesDataset(GeneratorDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -1078,7 +1078,7 @@ class PaddedDataset(GeneratorDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds diff --git a/mindspore/python/mindspore/dataset/engine/datasets_vision.py b/mindspore/python/mindspore/dataset/engine/datasets_vision.py index a641c6db2e5..5ed60ac49cb 100644 --- a/mindspore/python/mindspore/dataset/engine/datasets_vision.py +++ b/mindspore/python/mindspore/dataset/engine/datasets_vision.py @@ -156,7 +156,7 @@ class Caltech101Dataset(GeneratorDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -287,7 +287,7 @@ class Caltech256Dataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. 
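`GeneratorDataset`, whose docstring links are updated above, consumes a user-defined source that is either iterable or random-accessible. A random-accessible source only needs `__getitem__` and `__len__`; here is a minimal pure-Python sketch of such a source (in real use each item is typically a tuple of NumPy arrays, one per declared column):

```python
class SquaresSource:
    """A random-accessible source of the kind GeneratorDataset can wrap:
    it only needs __getitem__ and __len__. Each item is a tuple with one
    element per column (a single "data" column here)."""

    def __init__(self, n):
        self._n = n

    def __len__(self):
        return self._n

    def __getitem__(self, index):
        return (index * index,)

source = SquaresSource(5)
rows = [source[i] for i in range(len(source))]
# With MindSpore installed, this source could be wrapped along the lines of
# ds.GeneratorDataset(source, column_names=["data"]) (illustrative usage).
```

Random access is what lets the pipeline shuffle and shard the source by index rather than exhausting an iterator.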
cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -302,7 +302,7 @@ class Caltech256Dataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -399,7 +399,7 @@ class CelebADataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. decrypt (callable, optional): Image decryption function, which accepts the path of the encrypted image file and returns the decrypted bytes data. Default: ``None`` , no decryption. @@ -416,7 +416,7 @@ class CelebADataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -552,7 +552,7 @@ class Cifar10Dataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -567,7 +567,7 @@ class Cifar10Dataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -666,7 +666,7 @@ class Cifar100Dataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -681,7 +681,7 @@ class Cifar100Dataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -780,7 +780,7 @@ class CityscapesDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -798,7 +798,7 @@ class CityscapesDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -931,7 +931,7 @@ class CocoDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. 
cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. extra_metadata(bool, optional): Flag to add extra meta-data to row. If True, an additional column will be output at the end :py:obj:`[_meta-filename, dtype=string]` . Default: ``False``. @@ -994,7 +994,7 @@ class CocoDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - Column '[_meta-filename, dtype=string]' won't be output unless an explicit rename dataset op is added @@ -1173,7 +1173,7 @@ class DIV2KDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -1194,7 +1194,7 @@ class DIV2KDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -1341,7 +1341,7 @@ class EMnistDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -1353,7 +1353,7 @@ class EMnistDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -1456,7 +1456,7 @@ class FakeImageDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -1469,7 +1469,7 @@ class FakeImageDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -1527,7 +1527,7 @@ class FashionMnistDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -1541,7 +1541,7 @@ class FashionMnistDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -1632,7 +1632,7 @@ class FlickrDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. 
cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -1648,7 +1648,7 @@ class FlickrDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -1871,7 +1871,7 @@ class Flowers102Dataset(GeneratorDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -2005,7 +2005,7 @@ class Food101Dataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . This argument can only be specified when `num_shards` is also specified. Default: ``None`` . cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -2021,7 +2021,7 @@ class Food101Dataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -2126,7 +2126,7 @@ class ImageFolderDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
decrypt (callable, optional): Image decryption function, which accepts the path of the encrypted image file and returns the decrypted bytes data. Default: ``None`` , no decryption. @@ -2143,7 +2143,7 @@ class ImageFolderDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The shape of the image column is [image_size] if `decode` flag is ``False``, or [H,W,C] otherwise. @@ -2270,7 +2270,7 @@ class KITTIDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards`. Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -2283,7 +2283,7 @@ class KITTIDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -2390,7 +2390,7 @@ class KMnistDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -2404,7 +2404,7 @@ class KMnistDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -2500,7 +2500,7 @@ class LFWDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards`. Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -2513,7 +2513,7 @@ class LFWDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -2639,7 +2639,7 @@ class LSUNDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards`. Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -2653,7 +2653,7 @@ class LSUNDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -2760,7 +2760,7 @@ class ManifestDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. 
cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -2775,7 +2775,7 @@ class ManifestDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - If `decode` is ``False`` , the "image" column will get the 1D raw bytes of the image. @@ -2881,7 +2881,7 @@ class MnistDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -2896,7 +2896,7 @@ class MnistDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -2986,7 +2986,7 @@ class OmniglotDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards`. Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -2999,7 +2999,7 @@ class OmniglotDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -3106,7 +3106,7 @@ class PhotoTourDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -3124,7 +3124,7 @@ class PhotoTourDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -3234,7 +3234,7 @@ class Places365Dataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -3249,7 +3249,7 @@ class Places365Dataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -3356,7 +3356,7 @@ class QMnistDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. 
cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -3370,7 +3370,7 @@ class QMnistDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -3454,7 +3454,7 @@ class RandomDataset(SourceDataset, VisionBaseDataset): Default: ``None`` , will use global default workers(8), it can be set by :func:`mindspore.dataset.config.set_num_parallel_workers` . cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. shuffle (bool, optional): Whether or not to perform shuffle on the dataset. Default: ``None`` , expected order behavior shown in the table below. @@ -3477,7 +3477,7 @@ class RandomDataset(SourceDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> from mindspore import dtype as mstype @@ -3539,7 +3539,7 @@ class RenderedSST2Dataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . This argument can only be specified when `num_shards` is also specified. Default: ``None`` . cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -3554,7 +3554,7 @@ class RenderedSST2Dataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -3750,7 +3750,7 @@ class SBDataset(GeneratorDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -3847,7 +3847,7 @@ class SBUDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -3861,7 +3861,7 @@ class SBUDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -3944,7 +3944,7 @@ class SemeionDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -3958,7 +3958,7 @@ class SemeionDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -4054,7 +4054,7 @@ class STL10Dataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -4069,7 +4069,7 @@ class STL10Dataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -4167,7 +4167,7 @@ class SUN397Dataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . This argument can only be specified when `num_shards` is also specified. Default: ``None`` . cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. 
Raises: @@ -4181,7 +4181,7 @@ class SUN397Dataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -4340,7 +4340,7 @@ class SVHNDataset(GeneratorDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler @@ -4428,7 +4428,7 @@ class USPSDataset(SourceDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -4441,7 +4441,7 @@ class USPSDataset(SourceDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Examples: >>> import mindspore.dataset as ds @@ -4536,7 +4536,7 @@ class VOCDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. extra_metadata(bool, optional): Flag to add extra meta-data to row. If True, an additional column named :py:obj:`[_meta-filename, dtype=string]` will be output at the end. Default: ``False``. 
@@ -4560,7 +4560,7 @@ class VOCDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - Column '[_meta-filename, dtype=string]' won't be output unless an explicit rename dataset op @@ -4718,7 +4718,7 @@ class WIDERFaceDataset(MappableDataset, VisionBaseDataset): shard_id (int, optional): The shard ID within `num_shards` . Default: ``None`` . This argument can only be specified when `num_shards` is also specified. cache (DatasetCache, optional): Use tensor caching service to speed up dataset processing. More details: - `Single-Node Data Cache `_ . + `Single-Node Data Cache `_ . Default: ``None`` , which means no cache is used. Raises: @@ -4735,7 +4735,7 @@ class WIDERFaceDataset(MappableDataset, VisionBaseDataset): Tutorial Examples: - `Load & Process Data With Dataset Pipeline - `_ + `_ Note: - The parameters `num_samples` , `shuffle` , `num_shards` , `shard_id` can be used to control the sampler diff --git a/mindspore/python/mindspore/dataset/text/__init__.py b/mindspore/python/mindspore/dataset/text/__init__.py index 0b30bfa423e..f658759aa3d 100644 --- a/mindspore/python/mindspore/dataset/text/__init__.py +++ b/mindspore/python/mindspore/dataset/text/__init__.py @@ -25,7 +25,7 @@ Common imported modules in corresponding API examples are as follows: import mindspore.dataset.text as text See `Text Transforms -`_ tutorial for more details. +`_ tutorial for more details. Descriptions of common data processing terms are as follows: @@ -35,10 +35,10 @@ Descriptions of common data processing terms are as follows: The data transform operation can be executed in the data processing pipeline or in the eager mode: - Pipeline mode is generally used to process big datasets. Examples refer to - `introduction to data processing pipeline `_ . - Eager mode is more like a function call to process data. Examples refer to - `Lightweight Data Processing `_ . + `Lightweight Data Processing `_ . 
""" import platform diff --git a/mindspore/python/mindspore/dataset/text/transforms.py b/mindspore/python/mindspore/dataset/text/transforms.py index e9150b6dc9f..8d444850147 100644 --- a/mindspore/python/mindspore/dataset/text/transforms.py +++ b/mindspore/python/mindspore/dataset/text/transforms.py @@ -142,7 +142,7 @@ class AddToken(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_add_token @@ -191,8 +191,8 @@ class JiebaTokenizer(TextTensorOperation): >>> >>> # 1) If with_offsets=False, return one data column {["text", dtype=str]} >>> # The paths to jieba_hmm_file and jieba_mp_file can be downloaded directly from the mindspore repository. - >>> # Refer to https://gitee.com/mindspore/mindspore/blob/r2.3.q1/tests/ut/data/dataset/jiebadict/hmm_model.utf8 - >>> # and https://gitee.com/mindspore/mindspore/blob/r2.3.q1/tests/ut/data/dataset/jiebadict/jieba.dict.utf8 + >>> # Refer to https://gitee.com/mindspore/mindspore/blob/master/tests/ut/data/dataset/jiebadict/hmm_model.utf8 + >>> # and https://gitee.com/mindspore/mindspore/blob/master/tests/ut/data/dataset/jiebadict/jieba.dict.utf8 >>> jieba_hmm_file = "tests/ut/data/dataset/jiebadict/hmm_model.utf8" >>> jieba_mp_file = "tests/ut/data/dataset/jiebadict/jieba.dict.utf8" >>> tokenizer_op = text.JiebaTokenizer(jieba_hmm_file, jieba_mp_file, mode=JiebaMode.MP, with_offsets=False) @@ -219,7 +219,7 @@ class JiebaTokenizer(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_jieba_init @@ -409,7 +409,7 @@ class Lookup(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_lookup @@ -480,7 +480,7 @@ class Ngram(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_ngram @@ -531,7 +531,7 @@ class PythonTokenizer: Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_python_tokenizer @@ -587,7 +587,7 @@ class 
SentencePieceTokenizer(TextTensorOperation): >>> # Use the transform in dataset pipeline mode >>> numpy_slices_dataset = ds.NumpySlicesDataset(data=['Hello world'], column_names=["text"]) >>> # The paths to sentence_piece_vocab_file can be downloaded directly from the mindspore repository. Refer to - >>> # https://gitee.com/mindspore/mindspore/blob/r2.3.q1/tests/ut/data/dataset/test_sentencepiece/vocab.txt + >>> # https://gitee.com/mindspore/mindspore/blob/master/tests/ut/data/dataset/test_sentencepiece/vocab.txt >>> sentence_piece_vocab_file = "tests/ut/data/dataset/test_sentencepiece/vocab.txt" >>> vocab = text.SentencePieceVocab.from_file([sentence_piece_vocab_file], 512, 0.9995, ... SentencePieceModel.UNIGRAM, {}) @@ -607,7 +607,7 @@ class SentencePieceTokenizer(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_sentence_piece_tokenizer @@ -669,7 +669,7 @@ class SlidingWindow(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_slidingwindow @@ -722,7 +722,7 @@ class ToNumber(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_to_number @@ -764,7 +764,7 @@ class ToVectors(TextTensorOperation): >>> numpy_slices_dataset = ds.NumpySlicesDataset(data=["happy", "birthday", "to", "you"], column_names=["text"]) >>> # Load vectors from file >>> # The paths to vectors_file can be downloaded directly from the mindspore repository. 
Refer to - >>> # https://gitee.com/mindspore/mindspore/blob/r2.3.q1/tests/ut/data/dataset/testVectors/vectors.txt + >>> # https://gitee.com/mindspore/mindspore/blob/master/tests/ut/data/dataset/testVectors/vectors.txt >>> vectors_file = "tests/ut/data/dataset/testVectors/vectors.txt" >>> vectors = text.Vectors.from_file(vectors_file) >>> # Use ToVectors operation to map tokens to vectors @@ -783,7 +783,7 @@ class ToVectors(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_to_vectors @@ -843,7 +843,7 @@ class Truncate(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_truncate @@ -901,7 +901,7 @@ class TruncateSequencePair(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_pair_truncate @@ -965,7 +965,7 @@ class UnicodeCharTokenizer(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_with_offsets @@ -1047,7 +1047,7 @@ class WordpieceTokenizer(TextTensorOperation): Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_wordpiece_tokenizer @@ -1149,7 +1149,7 @@ if platform.system().lower() != 'windows': Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_basic_tokenizer @@ -1264,7 +1264,7 @@ if platform.system().lower() != 'windows': Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_bert_tokenizer @@ -1323,7 +1323,7 @@ if platform.system().lower() != 'windows': Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ def parse(self): @@ -1363,7 +1363,7 @@ if platform.system().lower() != 'windows': Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ def parse(self): @@ -1411,7 +1411,7 @@ if platform.system().lower() != 'windows': Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ def __init__(self, normalize_form=NormalizeForm.NFKC): @@ -1470,7 +1470,7 @@ if 
platform.system().lower() != 'windows': Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_regex_replace @@ -1549,7 +1549,7 @@ if platform.system().lower() != 'windows': Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_regex_tokenizer @@ -1622,7 +1622,7 @@ if platform.system().lower() != 'windows': Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @@ -1691,7 +1691,7 @@ if platform.system().lower() != 'windows': Tutorial Examples: - `Illustration of text transforms - `_ + `_ """ @check_with_offsets diff --git a/mindspore/python/mindspore/dataset/transforms/__init__.py b/mindspore/python/mindspore/dataset/transforms/__init__.py index 8f3512dc0d7..fd601e6d63e 100644 --- a/mindspore/python/mindspore/dataset/transforms/__init__.py +++ b/mindspore/python/mindspore/dataset/transforms/__init__.py @@ -31,7 +31,7 @@ Note: Legacy c_transforms and py_transforms are deprecated but can still be impo from mindspore.dataset.transforms import py_transforms See `Common Transforms -`_ tutorial for more details. +`_ tutorial for more details. Descriptions of common data processing terms are as follows: diff --git a/mindspore/python/mindspore/dataset/vision/__init__.py b/mindspore/python/mindspore/dataset/vision/__init__.py index 73e1cfba4b5..f89bf5e93ca 100755 --- a/mindspore/python/mindspore/dataset/vision/__init__.py +++ b/mindspore/python/mindspore/dataset/vision/__init__.py @@ -32,7 +32,7 @@ Note: Legacy c_transforms and py_transforms are deprecated but can still be impo import mindspore.dataset.vision.py_transforms as py_vision See `Vision Transforms -`_ tutorial for more details. +`_ tutorial for more details. Descriptions of common data processing terms are as follows: @@ -43,10 +43,10 @@ Descriptions of common data processing terms are as follows: The data transform operation can be executed in the data processing pipeline or in the eager mode: - Pipeline mode is generally used to process big datasets. 
Examples refer to - `introduction to data processing pipeline `_ . - Eager mode is more like a function call to process data. Examples refer to - `Lightweight Data Processing `_ . + `Lightweight Data Processing `_ . """ from . import c_transforms from . import py_transforms diff --git a/mindspore/python/mindspore/dataset/vision/transforms.py b/mindspore/python/mindspore/dataset/vision/transforms.py index b4662a49a10..e53306b39ad 100644 --- a/mindspore/python/mindspore/dataset/vision/transforms.py +++ b/mindspore/python/mindspore/dataset/vision/transforms.py @@ -144,7 +144,7 @@ class AdjustBrightness(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_adjust_brightness @@ -194,7 +194,7 @@ class AdjustBrightness(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target return self @@ -257,7 +257,7 @@ class AdjustContrast(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_adjust_contrast @@ -306,7 +306,7 @@ class AdjustContrast(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target return self @@ -377,7 +377,7 @@ class AdjustGamma(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_adjust_gamma @@ -444,7 +444,7 @@ class AdjustHue(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_adjust_hue @@ -493,7 +493,7 @@ class AdjustHue(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target return self @@ -557,7 +557,7 @@ class AdjustSaturation(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ 
+ `_ """ @check_adjust_saturation @@ -606,7 +606,7 @@ class AdjustSaturation(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target return self @@ -668,7 +668,7 @@ class AdjustSharpness(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_adjust_sharpness @@ -740,7 +740,7 @@ class Affine(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_affine @@ -805,7 +805,7 @@ class Affine(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target return self @@ -875,7 +875,7 @@ class AutoAugment(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_auto_augment @@ -937,7 +937,7 @@ class AutoContrast(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_auto_contrast @@ -1024,7 +1024,7 @@ class BoundingBoxAugment(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_bounding_box_augment_cpp @@ -1095,7 +1095,7 @@ class CenterCrop(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_center_crop @@ -1209,7 +1209,7 @@ class ConvertColor(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_convert_color @@ -1270,7 +1270,7 @@ class Crop(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_crop @@ -1324,7 +1324,7 @@ class Crop(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target return self @@ -1394,7 +1394,7 @@ class CutMixBatch(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_cut_mix_batch_c @@ 
-1453,7 +1453,7 @@ class CutOut(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_cutout_new @@ -1536,7 +1536,7 @@ class Decode(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_decode @@ -1629,7 +1629,7 @@ class Decode(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ if self.implementation == Implementation.PY and device_target == "Ascend": raise ValueError("The transform \"Decode(to_pil=True)\" cannot be performed on Ascend device, " + @@ -1687,7 +1687,7 @@ class Equalize(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ def __init__(self): @@ -1765,7 +1765,7 @@ class Erase(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_erase @@ -1853,7 +1853,7 @@ class FiveCrop(PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_five_crop @@ -1929,7 +1929,7 @@ class GaussianBlur(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_gaussian_blur @@ -1988,7 +1988,7 @@ class GaussianBlur(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target if device_target == "Ascend": @@ -2067,7 +2067,7 @@ class Grayscale(PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_num_channels @@ -2123,7 +2123,7 @@ class HorizontalFlip(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ def __init__(self): @@ -2172,7 +2172,7 @@ class HorizontalFlip(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target return self @@ -2222,7 +2222,7 @@ class HsvToRgb(PyTensorOperation): Tutorial Examples: - 
`Illustration of vision transforms - `_ + `_ """ @check_hsv_to_rgb @@ -2285,7 +2285,7 @@ class HWC2CHW(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ def __init__(self): @@ -2333,7 +2333,7 @@ class Invert(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ def __init__(self): @@ -2409,7 +2409,7 @@ class LinearTransformation(PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_linear_transform @@ -2497,7 +2497,7 @@ class MixUp(PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_mix_up @@ -2592,7 +2592,7 @@ class MixUpBatch(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_mix_up_batch_c @@ -2659,7 +2659,7 @@ class Normalize(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_normalize @@ -2718,7 +2718,7 @@ class Normalize(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target return self @@ -2856,7 +2856,7 @@ class Pad(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_pad @@ -2912,7 +2912,7 @@ class Pad(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target return self @@ -2993,7 +2993,7 @@ class PadToSize(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_pad_to_size @@ -3064,7 +3064,7 @@ class Perspective(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_perspective @@ -3126,7 +3126,7 @@ class Perspective(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ 
self.device_target = device_target return self @@ -3188,7 +3188,7 @@ class Posterize(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_posterize @@ -3261,7 +3261,7 @@ class RandAugment(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_rand_augment @@ -3325,7 +3325,7 @@ class RandomAdjustSharpness(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_adjust_sharpness @@ -3419,7 +3419,7 @@ class RandomAffine(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_affine @@ -3538,7 +3538,7 @@ class RandomAutoContrast(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_auto_contrast @@ -3598,7 +3598,7 @@ class RandomColor(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_positive_degrees @@ -3688,7 +3688,7 @@ class RandomColorAdjust(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_color_adjust @@ -3809,7 +3809,7 @@ class RandomCrop(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_crop @@ -3934,7 +3934,7 @@ class RandomCropDecodeResize(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_resize_crop @@ -4048,7 +4048,7 @@ class RandomCropWithBBox(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_crop @@ -4116,7 +4116,7 @@ class RandomEqualize(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_prob @@ -4194,7 +4194,7 @@ class RandomErasing(PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ 
""" @check_random_erasing @@ -4292,7 +4292,7 @@ class RandomGrayscale(PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_prob @@ -4362,7 +4362,7 @@ class RandomHorizontalFlip(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_prob @@ -4436,7 +4436,7 @@ class RandomHorizontalFlipWithBBox(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_prob @@ -4488,7 +4488,7 @@ class RandomInvert(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_prob @@ -4540,7 +4540,7 @@ class RandomLighting(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_alpha @@ -4637,7 +4637,7 @@ class RandomPerspective(PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_perspective @@ -4712,7 +4712,7 @@ class RandomPosterize(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_posterize @@ -4789,7 +4789,7 @@ class RandomResizedCrop(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_resize_crop @@ -4898,7 +4898,7 @@ class RandomResizedCropWithBBox(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_resize_crop @@ -4969,7 +4969,7 @@ class RandomResize(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_resize @@ -5054,7 +5054,7 @@ class RandomResizeWithBBox(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_resize @@ -5130,7 +5130,7 @@ class RandomRotation(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_rotation @@ -5229,7 
+5229,7 @@ class RandomSelectSubpolicy(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_select_subpolicy_op @@ -5292,7 +5292,7 @@ class RandomSharpness(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_positive_degrees @@ -5359,7 +5359,7 @@ class RandomSolarize(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_random_solarize @@ -5411,7 +5411,7 @@ class RandomVerticalFlip(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_prob @@ -5484,7 +5484,7 @@ class RandomVerticalFlipWithBBox(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_prob @@ -5540,7 +5540,7 @@ class Rescale(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_rescale @@ -5602,7 +5602,7 @@ class Resize(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_resize_interpolation @@ -5662,7 +5662,7 @@ class Resize(ImageTensorOperation, PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target if self.interpolation == Inter.ANTIALIAS and self.device_target == "Ascend": @@ -5747,7 +5747,7 @@ class ResizedCrop(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_resized_crop @@ -5807,7 +5807,7 @@ class ResizedCrop(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target return self @@ -5873,7 +5873,7 @@ class ResizeWithBBox(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_resize_interpolation @@ -5931,7 +5931,7 @@ class RgbToHsv(PyTensorOperation): Tutorial Examples: - 
`Illustration of vision transforms - `_ + `_ """ @check_rgb_to_hsv @@ -6009,7 +6009,7 @@ class Rotate(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_rotate @@ -6094,7 +6094,7 @@ class SlicePatches(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_slice_patches @@ -6151,7 +6151,7 @@ class Solarize(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_solarize @@ -6237,7 +6237,7 @@ class TenCrop(PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_ten_crop @@ -6299,7 +6299,7 @@ class ToNumpy(PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ def __init__(self): @@ -6359,7 +6359,7 @@ class ToPIL(PyTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ def __init__(self): @@ -6423,7 +6423,7 @@ class ToTensor(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_to_tensor @@ -6489,7 +6489,7 @@ class ToType(TypeCast): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @@ -6549,7 +6549,7 @@ class TrivialAugmentWide(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_trivial_augment_wide @@ -6624,7 +6624,7 @@ class UniformAugment(CompoundOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ @check_uniform_augment @@ -6683,7 +6683,7 @@ class VerticalFlip(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ def __init__(self): @@ -6732,7 +6732,7 @@ class VerticalFlip(ImageTensorOperation): Tutorial Examples: - `Illustration of vision transforms - `_ + `_ """ self.device_target = device_target return self diff --git a/mindspore/python/mindspore/dataset/vision/utils.py b/mindspore/python/mindspore/dataset/vision/utils.py index 
bbd9c040571..78743b68f4b 100755 --- a/mindspore/python/mindspore/dataset/vision/utils.py +++ b/mindspore/python/mindspore/dataset/vision/utils.py @@ -262,7 +262,7 @@ class ConvertMode(IntEnum): mode = c_values.get(mode) if mode is None: - raise RuntimeError("Unsupported ConvertMode, see https://www.mindspore.cn/docs/zh-CN/r2.3.q1/api_python/" + raise RuntimeError("Unsupported ConvertMode, see https://www.mindspore.cn/docs/zh-CN/master/api_python/" "dataset_vision/mindspore.dataset.vision.ConvertColor.html for more details.") return mode diff --git a/mindspore/python/mindspore/experimental/optim/adadelta.py b/mindspore/python/mindspore/experimental/optim/adadelta.py index 19d25e05d1f..042db568de6 100644 --- a/mindspore/python/mindspore/experimental/optim/adadelta.py +++ b/mindspore/python/mindspore/experimental/optim/adadelta.py @@ -63,7 +63,7 @@ class Adadelta(Optimizer): .. warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . Args: params (Union[list(Parameter), list(dict)]): list of parameters to optimize or dicts defining @@ -97,7 +97,7 @@ class Adadelta(Optimizer): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.Adadelta(net.trainable_params(), lr=0.1) diff --git a/mindspore/python/mindspore/experimental/optim/adagrad.py b/mindspore/python/mindspore/experimental/optim/adagrad.py index 1fb480eb4da..b75c72e4e6c 100644 --- a/mindspore/python/mindspore/experimental/optim/adagrad.py +++ b/mindspore/python/mindspore/experimental/optim/adagrad.py @@ -60,7 +60,7 @@ class Adagrad(Optimizer): .. 
warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . Args: params (Union[list(Parameter), list(dict)]): list of parameters to optimize or dicts defining @@ -95,7 +95,7 @@ class Adagrad(Optimizer): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.Adagrad(net.trainable_params(), lr=0.1) diff --git a/mindspore/python/mindspore/experimental/optim/adam.py b/mindspore/python/mindspore/experimental/optim/adam.py index 5c5a6140399..9ad8671148a 100644 --- a/mindspore/python/mindspore/experimental/optim/adam.py +++ b/mindspore/python/mindspore/experimental/optim/adam.py @@ -80,7 +80,7 @@ class Adam(Optimizer): .. warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . Args: params (Union[list(Parameter), list(dict)]): list of parameters to optimize or dicts defining @@ -115,7 +115,7 @@ class Adam(Optimizer): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.Adam(net.trainable_params(), lr=0.1) diff --git a/mindspore/python/mindspore/experimental/optim/adamax.py b/mindspore/python/mindspore/experimental/optim/adamax.py index 3f9820d0265..55108d758a7 100644 --- a/mindspore/python/mindspore/experimental/optim/adamax.py +++ b/mindspore/python/mindspore/experimental/optim/adamax.py @@ -66,7 +66,7 @@ class Adamax(Optimizer): .. warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . Args: params (Union[list(Parameter), list(dict)]): list of parameters to optimize or dicts defining @@ -100,7 +100,7 @@ class Adamax(Optimizer): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.Adamax(net.trainable_params(), lr=0.1) diff --git a/mindspore/python/mindspore/experimental/optim/adamw.py b/mindspore/python/mindspore/experimental/optim/adamw.py index 4067d12387e..861b0105f61 100644 --- a/mindspore/python/mindspore/experimental/optim/adamw.py +++ b/mindspore/python/mindspore/experimental/optim/adamw.py @@ -101,7 +101,7 @@ class AdamW(Optimizer): .. warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . 
Args: params (Union[list(Parameter), list(dict)]): list of parameters to optimize or dicts defining @@ -136,7 +136,7 @@ class AdamW(Optimizer): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.AdamW(net.trainable_params(), lr=0.1) diff --git a/mindspore/python/mindspore/experimental/optim/asgd.py b/mindspore/python/mindspore/experimental/optim/asgd.py index 243b88ae0aa..2e937833d79 100644 --- a/mindspore/python/mindspore/experimental/optim/asgd.py +++ b/mindspore/python/mindspore/experimental/optim/asgd.py @@ -56,7 +56,7 @@ class ASGD(Optimizer): .. warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . Args: params (Union[list(Parameter), list(dict)]): list of parameters to optimize or dicts defining @@ -85,7 +85,7 @@ class ASGD(Optimizer): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.ASGD(net.trainable_params(), lr=0.1) diff --git a/mindspore/python/mindspore/experimental/optim/lr_scheduler.py b/mindspore/python/mindspore/experimental/optim/lr_scheduler.py index 83fece6abc9..b236b9fb891 100644 --- a/mindspore/python/mindspore/experimental/optim/lr_scheduler.py +++ b/mindspore/python/mindspore/experimental/optim/lr_scheduler.py @@ -38,7 +38,7 @@ class LRScheduler: .. 
warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): The optimizer instance. @@ -149,7 +149,7 @@ class StepLR(LRScheduler): .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -166,7 +166,7 @@ class StepLR(LRScheduler): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.Adam(net.trainable_params(), lr=0.05) @@ -186,7 +186,7 @@ class StepLR(LRScheduler): ... return loss >>> for epoch in range(6): ... # Create the dataset taking MNIST as an example. Refer to - ... # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + ... # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py ... for data, label in create_dataset(): ... train_step(data, label) ... scheduler.step() @@ -221,7 +221,7 @@ class LinearLR(LRScheduler): .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -246,7 +246,7 @@ class LinearLR(LRScheduler): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.Adam(net.trainable_params(), lr=0.05) @@ -268,7 +268,7 @@ class LinearLR(LRScheduler): ... return loss >>> for epoch in range(5): ... # Create the dataset taking MNIST as an example. Refer to - ... # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + ... # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py ... for data, label in create_dataset(): ... train_step(data, label) ... scheduler.step() @@ -316,7 +316,7 @@ class ExponentialLR(LRScheduler): .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -383,7 +383,7 @@ class PolynomialLR(LRScheduler): .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -450,7 +450,7 @@ class LambdaLR(LRScheduler): .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -503,7 +503,7 @@ class MultiplicativeLR(LRScheduler): .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -557,7 +557,7 @@ class MultiStepLR(LRScheduler): .. 
warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -633,7 +633,7 @@ class ConstantLR(LRScheduler): .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -695,7 +695,7 @@ class SequentialLR: .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -799,7 +799,7 @@ class ReduceLROnPlateau: .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -994,7 +994,7 @@ class CyclicLR(LRScheduler): .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -1171,7 +1171,7 @@ class CosineAnnealingWarmRestarts(LRScheduler): .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. @@ -1303,7 +1303,7 @@ class CosineAnnealingLR(LRScheduler): .. warning:: This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in `Experimental Optimizer - `_ . + `_ . 
Args: optimizer (:class:`mindspore.experimental.optim.Optimizer`): Wrapped optimizer. diff --git a/mindspore/python/mindspore/experimental/optim/nadam.py b/mindspore/python/mindspore/experimental/optim/nadam.py index f01baa56185..77b99ee2eba 100644 --- a/mindspore/python/mindspore/experimental/optim/nadam.py +++ b/mindspore/python/mindspore/experimental/optim/nadam.py @@ -57,7 +57,7 @@ class NAdam(Optimizer): .. warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . Args: params (Union[list(Parameter), list(dict)]): list of parameters to optimize or dicts defining @@ -89,7 +89,7 @@ class NAdam(Optimizer): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.NAdam(net.trainable_params(), lr=0.1) diff --git a/mindspore/python/mindspore/experimental/optim/optimizer.py b/mindspore/python/mindspore/experimental/optim/optimizer.py index 0cb13e83504..43f928e3d87 100644 --- a/mindspore/python/mindspore/experimental/optim/optimizer.py +++ b/mindspore/python/mindspore/experimental/optim/optimizer.py @@ -36,7 +36,7 @@ class Optimizer(Cell): .. warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . 
Args: params (Union[list(Parameter), list(dict)]): an iterable of :class:`mindspore.Parameter` or diff --git a/mindspore/python/mindspore/experimental/optim/radam.py b/mindspore/python/mindspore/experimental/optim/radam.py index 6986b26b96e..78a4f26890c 100644 --- a/mindspore/python/mindspore/experimental/optim/radam.py +++ b/mindspore/python/mindspore/experimental/optim/radam.py @@ -89,7 +89,7 @@ class RAdam(Optimizer): .. warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . Args: params (Union[list(Parameter), list(dict)]): list of parameters to optimize or dicts defining @@ -119,7 +119,7 @@ class RAdam(Optimizer): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.RAdam(net.trainable_params(), lr=0.1) diff --git a/mindspore/python/mindspore/experimental/optim/rmsprop.py b/mindspore/python/mindspore/experimental/optim/rmsprop.py index a0cfd8b2da8..4d17cabcb4d 100644 --- a/mindspore/python/mindspore/experimental/optim/rmsprop.py +++ b/mindspore/python/mindspore/experimental/optim/rmsprop.py @@ -53,7 +53,7 @@ class RMSprop(Optimizer): .. warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . Args: params (Union[list(Parameter), list(dict)]): list of parameters to optimize or dicts defining @@ -87,7 +87,7 @@ class RMSprop(Optimizer): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.RMSprop(net.trainable_params(), lr=0.1) diff --git a/mindspore/python/mindspore/experimental/optim/rprop.py b/mindspore/python/mindspore/experimental/optim/rprop.py index 091ff898758..bb0cb911a0b 100644 --- a/mindspore/python/mindspore/experimental/optim/rprop.py +++ b/mindspore/python/mindspore/experimental/optim/rprop.py @@ -68,7 +68,7 @@ class Rprop(Optimizer): .. warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . Args: params (Union[list(Parameter), list(dict)]): list of parameters to optimize or dicts defining @@ -100,7 +100,7 @@ class Rprop(Optimizer): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.Rprop(net.trainable_params(), lr=0.1) diff --git a/mindspore/python/mindspore/experimental/optim/sgd.py b/mindspore/python/mindspore/experimental/optim/sgd.py index 4ba1786f909..0fd6041aa69 100644 --- a/mindspore/python/mindspore/experimental/optim/sgd.py +++ b/mindspore/python/mindspore/experimental/optim/sgd.py @@ -56,7 +56,7 @@ class SGD(Optimizer): .. warning:: This is an experimental optimizer API that is subject to change. This module must be used with lr scheduler module in `LRScheduler Class - `_ . + `_ . 
Args: params (Union[list(Parameter), list(dict)]): list of parameters to optimize or dicts defining @@ -90,7 +90,7 @@ class SGD(Optimizer): >>> from mindspore import nn >>> from mindspore.experimental import optim >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optimizer = optim.SGD(net.trainable_params(), lr=0.1) diff --git a/mindspore/python/mindspore/hypercomplex/double/double_operators.py b/mindspore/python/mindspore/hypercomplex/double/double_operators.py index 6a122255aa4..1e335f3c008 100644 --- a/mindspore/python/mindspore/hypercomplex/double/double_operators.py +++ b/mindspore/python/mindspore/hypercomplex/double/double_operators.py @@ -1297,7 +1297,7 @@ class ReLU(nn.Cell): Inputs: - **inp** (Tensor) - The input of ReLU is a Tensor of shape (2, *, ..., *). The data type is - `number `_ . + `number `_ . Outputs: Tensor, with the same type and shape as the `inp`. diff --git a/mindspore/python/mindspore/hypercomplex/double/double_relu.py b/mindspore/python/mindspore/hypercomplex/double/double_relu.py index 541db8b2bfe..b4ee238c76d 100644 --- a/mindspore/python/mindspore/hypercomplex/double/double_relu.py +++ b/mindspore/python/mindspore/hypercomplex/double/double_relu.py @@ -49,7 +49,7 @@ class J1J2ReLU(nn.Cell): Inputs: - **inp** (Tensor) - The input of ReLU is a Tensor of shape (2, *, ..., *). The data type is - `number `_ . + `number `_ . Outputs: Tensor, with the same type and shape as the `inp`. 
diff --git a/mindspore/python/mindspore/nn/cell.py b/mindspore/python/mindspore/nn/cell.py index ecae9484df6..c4efc1032c5 100755 --- a/mindspore/python/mindspore/nn/cell.py +++ b/mindspore/python/mindspore/nn/cell.py @@ -217,7 +217,7 @@ class Cell(Cell_): Tutorial Examples: - `Cell and Parameter - Custom Cell Reverse - `_ + `_ """ return self._bprop_debug @@ -1335,7 +1335,7 @@ class Cell(Cell_): Tutorial Examples: - `Model Training - Optimizer - `_ + `_ """ return list(filter(lambda x: x.requires_grad, self.get_parameters(expand=recurse))) @@ -1446,7 +1446,7 @@ class Cell(Cell_): Tutorial Examples: - `Building a Network - Model Parameters - `_ + `_ """ cells = [] if expand: @@ -1785,7 +1785,7 @@ class Cell(Cell_): accelerate the algorithm in the algorithm library. If `boost_type` is not in the algorithm library, please view the algorithm in the algorithm library through - `algorithm library `_. + `algorithm library `_. Note: Some acceleration algorithms may affect the accuracy of the network, please choose carefully. @@ -1842,7 +1842,7 @@ class Cell(Cell_): Tutorial Examples: - `Model Training - Implementing Training and Evaluation - `_ + `_ """ if mode: self._phase = 'train' diff --git a/mindspore/python/mindspore/nn/layer/container.py b/mindspore/python/mindspore/nn/layer/container.py index e216ccd7ce9..49597a1bb6f 100644 --- a/mindspore/python/mindspore/nn/layer/container.py +++ b/mindspore/python/mindspore/nn/layer/container.py @@ -123,7 +123,7 @@ class _CellListBase: class SequentialCell(Cell): """ Sequential Cell container. For more details about Cell, please refer to - `Cell `_. + `Cell `_. A list of Cells will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of cells can also be passed in. @@ -325,7 +325,7 @@ class SequentialCell(Cell): class CellList(_CellListBase, Cell): """ Holds Cells in a list. For more details about Cell, please refer to - `Cell `_. + `Cell `_. 
CellList can be used like a regular Python list, the Cells it contains have been initialized and the types of Cells it contains can not be CellDict. diff --git a/mindspore/python/mindspore/nn/layer/conv.py b/mindspore/python/mindspore/nn/layer/conv.py index 34973269467..b8b5ceaa87c 100644 --- a/mindspore/python/mindspore/nn/layer/conv.py +++ b/mindspore/python/mindspore/nn/layer/conv.py @@ -242,11 +242,11 @@ class Conv2d(_Conv): distributions as well as constant ``'One'`` and ``'Zero'`` distributions are possible. Alias ``'xavier_uniform'`` , ``'he_uniform'`` , ``'ones'`` and ``'zeros'`` are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of - `Initializer `_, + `Initializer `_, for more details. Default: ``None`` , weight will be initialized using ``'HeUniform'``. bias_init (Union[Tensor, str, Initializer, numbers.Number], optional): Initialization method of bias parameter. Available initialization methods are the same as 'weight_init'. Refer to the values of - `Initializer `_, + `Initializer `_, for more details. Default: ``None`` , bias will be initialized using ``'Uniform'`` . data_format (str, optional): The optional value for data format, is ``'NHWC'`` or ``'NCHW'`` . Default: ``'NCHW'`` . (NHWC is only supported in GPU now.) @@ -458,11 +458,11 @@ class Conv1d(_Conv): distributions as well as constant 'One' and 'Zero' distributions are possible. Alias ``'xavier_uniform'`` , ``'he_uniform'`` , ``'ones'`` and ``'zeros'`` are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of - `Initializer `_, + `Initializer `_, for more details. Default: ``None`` , weight will be initialized using ``'HeUniform'``. bias_init (Union[Tensor, str, Initializer, numbers.Number], optional): Initialization method of bias parameter. Available initialization methods are the same as 'weight_init'. Refer to the values of - `Initializer `_, + `Initializer `_, for more details. 
Default: ``None`` , bias will be initialized using ``'Uniform'``. dtype (:class:`mindspore.dtype`): Dtype of Parameters. Default: ``mstype.float32`` . @@ -691,11 +691,11 @@ class Conv3d(_Conv): distributions as well as constant ``'One'`` and ``'Zero'`` distributions are possible. Alias ``'xavier_uniform'`` , ``'he_uniform'`` , ``'ones'`` and ``'zeros'`` are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of - `Initializer `_, + `Initializer `_, for more details. Default: ``None`` , weight will be initialized using ``'HeUniform'``. bias_init (Union[Tensor, str, Initializer, numbers.Number], optional): Initialization method of bias parameter. Available initialization methods are the same as 'weight_init'. Refer to the values of - `Initializer `_, + `Initializer `_, for more details. Default: ``None`` , bias will be initialized using ``'Uniform'`` . data_format (str, optional): The optional value for data format. Currently only support ``'NCDHW'`` . dtype (:class:`mindspore.dtype`): Dtype of Parameters. Default: ``mstype.float32`` . diff --git a/mindspore/python/mindspore/nn/layer/embedding.py b/mindspore/python/mindspore/nn/layer/embedding.py index dfd4e0bec02..6de0f5c08ec 100755 --- a/mindspore/python/mindspore/nn/layer/embedding.py +++ b/mindspore/python/mindspore/nn/layer/embedding.py @@ -66,7 +66,7 @@ class Embedding(Cell): use_one_hot (bool): Specifies whether to apply one_hot encoding form. Default: ``False`` . embedding_table (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the embedding_table. Refer to class `mindspore.common.initializer - `_ + `_ for the values of string when a string is specified. Default: ``'normal'`` . dtype (:class:`mindspore.dtype`): Data type of `x`. Default: ``mstype.float32`` . 
padding_idx (int, None): When the padding_idx encounters index, the output embedding vector of this index diff --git a/mindspore/python/mindspore/nn/layer/normalization.py b/mindspore/python/mindspore/nn/layer/normalization.py index f629b1084dd..cd1b0fd8655 100644 --- a/mindspore/python/mindspore/nn/layer/normalization.py +++ b/mindspore/python/mindspore/nn/layer/normalization.py @@ -206,19 +206,19 @@ class BatchNorm1d(_BatchNorm): Default: ``True`` . gamma_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the :math:`\gamma` weight. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'`` , etc. Default: ``'ones'`` . beta_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the :math:`\beta` weight. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'``, etc. Default: ``'zeros'`` . moving_mean_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the moving mean. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'`` , etc. Default: ``'zeros'`` . moving_var_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the moving variance. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'`` , etc. Default: ``'ones'`` . use_batch_statistics (bool): If ``true`` , use the mean value and variance value of current batch data. If ``false`` , use the mean value and variance value of specified value. If ``None`` , the training process @@ -302,19 +302,19 @@ class BatchNorm2d(_BatchNorm): Default: ``True`` . gamma_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the :math:`\gamma` weight. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'`` , etc. Default: ``'ones'`` . 
beta_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the :math:`\beta` weight. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'`` , etc. Default: ``'zeros'`` . moving_mean_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the moving mean. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'`` , etc. Default: ``'zeros'`` . moving_var_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the moving variance. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'`` , etc. Default: ``'ones'`` . use_batch_statistics (bool): Default: ``None`` . @@ -391,19 +391,19 @@ class BatchNorm3d(Cell): affine (bool): A bool value. When set to ``True`` , gamma and beta can be learned. Default: ``True`` . gamma_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the gamma weight. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'`` , etc. Default: ``'ones'`` . beta_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the beta weight. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'`` , etc. Default: ``'zeros'`` . moving_mean_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the moving mean. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'`` , etc. Default: ``'zeros'`` . moving_var_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the moving variance. The values of str refer to the function `mindspore.common.initializer - `_ + `_ including ``'zeros'`` , ``'ones'`` , etc. Default: ``'ones'`` . 
use_batch_statistics (bool): If true, use the mean value and variance value of current batch data. If ``false``, use the mean value and variance value of specified value. If ``None`` , the training process @@ -558,14 +558,14 @@ class SyncBatchNorm(_BatchNorm): For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the `Ascend tutorial - `_ + `_ for more details. For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup - `_ . + `_ . For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster - Startup `_ . + Startup `_ . This example should be run with multiple devices. diff --git a/mindspore/python/mindspore/nn/optim/ada_grad.py b/mindspore/python/mindspore/nn/optim/ada_grad.py index fc5b46475d1..a7e3cab129c 100644 --- a/mindspore/python/mindspore/nn/optim/ada_grad.py +++ b/mindspore/python/mindspore/nn/optim/ada_grad.py @@ -126,7 +126,7 @@ class Adagrad(Optimizer): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. update_slots (bool): Whether the :math:`h` will be updated. Default: ``True`` . @@ -168,7 +168,7 @@ class Adagrad(Optimizer): >>> import mindspore.nn as nn >>> >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.Adagrad(params=net.trainable_params()) diff --git a/mindspore/python/mindspore/nn/optim/adadelta.py b/mindspore/python/mindspore/nn/optim/adadelta.py index 303f5a6a8d4..6175e987b0a 100644 --- a/mindspore/python/mindspore/nn/optim/adadelta.py +++ b/mindspore/python/mindspore/nn/optim/adadelta.py @@ -106,7 +106,7 @@ class Adadelta(Optimizer): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. rho (float): Decay rate, must be in range [0.0, 1.0]. Default: ``0.9`` . diff --git a/mindspore/python/mindspore/nn/optim/adafactor.py b/mindspore/python/mindspore/nn/optim/adafactor.py index 289ef7cc7fe..60066836f2c 100644 --- a/mindspore/python/mindspore/nn/optim/adafactor.py +++ b/mindspore/python/mindspore/nn/optim/adafactor.py @@ -264,7 +264,7 @@ class AdaFactor(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) Parameters use the default learning rate with None and weight decay with 0. >>> optim = nn.AdaFactor(params=net.trainable_params()) diff --git a/mindspore/python/mindspore/nn/optim/adam.py b/mindspore/python/mindspore/nn/optim/adam.py index f4ea4ef3da8..cc46b9271f2 100755 --- a/mindspore/python/mindspore/nn/optim/adam.py +++ b/mindspore/python/mindspore/nn/optim/adam.py @@ -579,7 +579,7 @@ class Adam(Optimizer): - LearningRateSchedule: Learning rate is dynamic. 
During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. beta1 (float): The exponential decay rate for the 1st moment estimations. Should be in range (0.0, 1.0). @@ -652,7 +652,7 @@ class Adam(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.Adam(params=net.trainable_params()) @@ -906,7 +906,7 @@ class AdamWeightDecay(Optimizer): There is usually no connection between a optimizer and mixed precision. But when `FixedLossScaleManager` is used and `drop_overflow_update` in `FixedLossScaleManager` is set to False, optimizer needs to set the 'loss_scale'. As this optimizer has no argument of `loss_scale`, so `loss_scale` needs to be processed by other means, refer - document `LossScale `_ to + document `LossScale `_ to process `loss_scale` correctly. If parameters are not grouped, the `weight_decay` in optimizer will be applied on the network parameters without @@ -950,7 +950,7 @@ class AdamWeightDecay(Optimizer): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. beta1 (float): The exponential decay rate for the 1st moment estimations. Default: ``0.9`` . @@ -992,7 +992,7 @@ class AdamWeightDecay(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.AdamWeightDecay(params=net.trainable_params()) @@ -1150,7 +1150,7 @@ class AdamOffload(Optimizer): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. beta1 (float): The exponential decay rate for the 1st moment estimations. Should be in range (0.0, 1.0). @@ -1205,7 +1205,7 @@ class AdamOffload(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.AdamOffload(params=net.trainable_params()) diff --git a/mindspore/python/mindspore/nn/optim/adamax.py b/mindspore/python/mindspore/nn/optim/adamax.py index c284b730d69..0e86b1de04a 100644 --- a/mindspore/python/mindspore/nn/optim/adamax.py +++ b/mindspore/python/mindspore/nn/optim/adamax.py @@ -115,7 +115,7 @@ class AdaMax(Optimizer): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. beta1 (float): The exponential decay rate for the 1st moment estimations. Should be in range (0.0, 1.0). @@ -163,7 +163,7 @@ class AdaMax(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.AdaMax(params=net.trainable_params()) diff --git a/mindspore/python/mindspore/nn/optim/adasum.py b/mindspore/python/mindspore/nn/optim/adasum.py index d92ea757801..3cac157b820 100644 --- a/mindspore/python/mindspore/nn/optim/adasum.py +++ b/mindspore/python/mindspore/nn/optim/adasum.py @@ -445,7 +445,7 @@ class AdaSumByGradWrapCell(Cell): >>> import mindspore as ms >>> from mindspore import nn >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> optim = nn.AdaSumByGradWrapCell(nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)) >>> loss = nn.SoftmaxCrossEntropyWithLogits() @@ -514,7 +514,7 @@ class AdaSumByDeltaWeightWrapCell(Cell): >>> import mindspore as ms >>> from mindspore import nn >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> optim = nn.AdaSumByDeltaWeightWrapCell(nn.Momentum(params=net.trainable_params(), ... learning_rate=0.1, momentum=0.9)) diff --git a/mindspore/python/mindspore/nn/optim/asgd.py b/mindspore/python/mindspore/nn/optim/asgd.py index b06149a820b..5a6c2d78a8f 100755 --- a/mindspore/python/mindspore/nn/optim/asgd.py +++ b/mindspore/python/mindspore/nn/optim/asgd.py @@ -94,7 +94,7 @@ class ASGD(Optimizer): - LearningRateSchedule: Learning rate is dynamic. 
During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. lambd (float): The decay term. Default: ``1e-4`` . @@ -130,7 +130,7 @@ class ASGD(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.ASGD(params=net.trainable_params()) diff --git a/mindspore/python/mindspore/nn/optim/ftrl.py b/mindspore/python/mindspore/nn/optim/ftrl.py index 0d17bcc5342..e786554b726 100644 --- a/mindspore/python/mindspore/nn/optim/ftrl.py +++ b/mindspore/python/mindspore/nn/optim/ftrl.py @@ -275,7 +275,7 @@ class FTRL(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.FTRL(params=net.trainable_params()) diff --git a/mindspore/python/mindspore/nn/optim/lamb.py b/mindspore/python/mindspore/nn/optim/lamb.py index d88f5e556c1..097643ffd38 100755 --- a/mindspore/python/mindspore/nn/optim/lamb.py +++ b/mindspore/python/mindspore/nn/optim/lamb.py @@ -132,7 +132,7 @@ class Lamb(Optimizer): There is usually no connection between a optimizer and mixed precision. But when `FixedLossScaleManager` is used and `drop_overflow_update` in `FixedLossScaleManager` is set to False, optimizer needs to set the 'loss_scale'. As this optimizer has no argument of `loss_scale`, so `loss_scale` needs to be processed by other means. 
Refer - document `LossScale `_ to + document `LossScale `_ to process `loss_scale` correctly. If parameters are not grouped, the `weight_decay` in optimizer will be applied on the network parameters without @@ -184,7 +184,7 @@ class Lamb(Optimizer): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. beta1 (float): The exponential decay rate for the 1st moment estimations. Default: ``0.9`` . @@ -226,7 +226,7 @@ class Lamb(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.Lamb(params=net.trainable_params(), learning_rate=0.1) diff --git a/mindspore/python/mindspore/nn/optim/lars.py b/mindspore/python/mindspore/nn/optim/lars.py index f7cef94d6f0..d5090ec2a97 100755 --- a/mindspore/python/mindspore/nn/optim/lars.py +++ b/mindspore/python/mindspore/nn/optim/lars.py @@ -109,7 +109,7 @@ class LARS(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits() >>> opt = nn.Momentum(net.trainable_params(), 0.1, 0.9) diff --git a/mindspore/python/mindspore/nn/optim/lazyadam.py b/mindspore/python/mindspore/nn/optim/lazyadam.py index 7c2fb85d1fe..fda7fa113f2 100644 --- a/mindspore/python/mindspore/nn/optim/lazyadam.py +++ b/mindspore/python/mindspore/nn/optim/lazyadam.py @@ -334,7 +334,7 @@ class LazyAdam(Optimizer): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. beta1 (float): The exponential decay rate for the 1st moment estimations. Should be in range (0.0, 1.0). @@ -390,7 +390,7 @@ class LazyAdam(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.LazyAdam(params=net.trainable_params()) diff --git a/mindspore/python/mindspore/nn/optim/momentum.py b/mindspore/python/mindspore/nn/optim/momentum.py index 86dd554a0b6..60822ba1190 100755 --- a/mindspore/python/mindspore/nn/optim/momentum.py +++ b/mindspore/python/mindspore/nn/optim/momentum.py @@ -116,7 +116,7 @@ class Momentum(Optimizer): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. momentum (float): Hyperparameter of type float, means momentum for the moving average. 
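For context on the Momentum hunk above: the `momentum` hyperparameter controls a moving average of gradients (the heavy-ball method). A plain-Python sketch of the standard update, assuming the common formulation `v <- momentum * v + g; p <- p - lr * v` (MindSpore's actual kernel may differ in details such as dampening):

```python
# Heavy-ball momentum: accumulate a velocity from past gradients,
# then step against the velocity rather than the raw gradient.
def momentum_step(p, v, g, lr=0.1, momentum=0.9):
    v = momentum * v + g   # update the moving average of gradients
    p = p - lr * v         # move the parameter against the velocity
    return p, v

p, v = 1.0, 0.0
for g in [0.5, 0.5]:       # two steps with a constant gradient
    p, v = momentum_step(p, v, g)
```

After two steps the velocity has grown to 0.95 (0.9 * 0.5 + 0.5), so the second step moves the parameter farther than the first — the acceleration effect momentum is used for.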
@@ -161,7 +161,7 @@ class Momentum(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9) diff --git a/mindspore/python/mindspore/nn/optim/optimizer.py b/mindspore/python/mindspore/nn/optim/optimizer.py index a8cdb19a80e..d2e82ce28f4 100644 --- a/mindspore/python/mindspore/nn/optim/optimizer.py +++ b/mindspore/python/mindspore/nn/optim/optimizer.py @@ -96,7 +96,7 @@ class Optimizer(Cell): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. @@ -774,7 +774,7 @@ class Optimizer(Cell): Examples: >>> from mindspore import nn >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params())) >>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params())) diff --git a/mindspore/python/mindspore/nn/optim/proximal_ada_grad.py b/mindspore/python/mindspore/nn/optim/proximal_ada_grad.py index c673973f10d..20a7f1e078d 100644 --- a/mindspore/python/mindspore/nn/optim/proximal_ada_grad.py +++ b/mindspore/python/mindspore/nn/optim/proximal_ada_grad.py @@ -122,7 +122,7 @@ class ProximalAdagrad(Optimizer): - LearningRateSchedule: Learning rate is dynamic. 
During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of the current step. l1 (float): l1 regularization strength, must be greater than or equal to zero. Default: ``0.0`` . @@ -165,7 +165,7 @@ class ProximalAdagrad(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.ProximalAdagrad(params=net.trainable_params()) diff --git a/mindspore/python/mindspore/nn/optim/rmsprop.py b/mindspore/python/mindspore/nn/optim/rmsprop.py index f0007878224..f91e12a0fe4 100644 --- a/mindspore/python/mindspore/nn/optim/rmsprop.py +++ b/mindspore/python/mindspore/nn/optim/rmsprop.py @@ -137,7 +137,7 @@ class RMSProp(Optimizer): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of the current step. decay (float): Decay rate. Should be equal to or greater than 0. Default: ``0.9`` . @@ -186,7 +186,7 @@ class RMSProp(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.RMSProp(params=net.trainable_params(), learning_rate=0.1) diff --git a/mindspore/python/mindspore/nn/optim/rprop.py b/mindspore/python/mindspore/nn/optim/rprop.py index 4268ed52ab9..9400e82e07d 100755 --- a/mindspore/python/mindspore/nn/optim/rprop.py +++ b/mindspore/python/mindspore/nn/optim/rprop.py @@ -96,7 +96,7 @@ class Rprop(Optimizer): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. etas (tuple[float, float]): The factor of multiplicative increasing or @@ -137,7 +137,7 @@ class Rprop(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.Rprop(params=net.trainable_params()) diff --git a/mindspore/python/mindspore/nn/optim/sgd.py b/mindspore/python/mindspore/nn/optim/sgd.py index 6b19615303b..e1cbadbfca6 100755 --- a/mindspore/python/mindspore/nn/optim/sgd.py +++ b/mindspore/python/mindspore/nn/optim/sgd.py @@ -103,7 +103,7 @@ class SGD(Optimizer): - LearningRateSchedule: Learning rate is dynamic. During training, the optimizer calls the instance of `LearningRateSchedule - `_ + `_ with step as the input to get the learning rate of current step. momentum (float): A floating point value the momentum. must be at least 0.0. Default: ``0.0`` . 
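Nearly every optimizer hunk in this patch repeats the same `LearningRateSchedule` note: when `learning_rate` is a schedule instance, the optimizer calls it with the current step to obtain a dynamic learning rate. A hypothetical exponential-decay schedule sketched in plain Python (the function name and signature here are illustrative, not MindSpore's API):

```python
# The schedule pattern: a callable mapping the global step to a
# learning rate. Exponential decay is one common choice.
def exponential_decay_lr(base_lr, decay_rate, step, decay_steps):
    """Decay base_lr by decay_rate every decay_steps steps."""
    return base_lr * decay_rate ** (step / decay_steps)

lrs = [exponential_decay_lr(0.1, 0.9, s, decay_steps=10) for s in (0, 10, 20)]
```

At steps 0, 10, and 20 this yields rates 0.1, 0.09, and 0.081 — each decay interval multiplies the rate by 0.9.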
@@ -134,7 +134,7 @@ class SGD(Optimizer): >>> from mindspore import nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> #1) All parameters use the same learning rate and weight decay >>> optim = nn.SGD(params=net.trainable_params()) diff --git a/mindspore/python/mindspore/nn/optim/thor.py b/mindspore/python/mindspore/nn/optim/thor.py index 70b2f18c262..bc18e577428 100644 --- a/mindspore/python/mindspore/nn/optim/thor.py +++ b/mindspore/python/mindspore/nn/optim/thor.py @@ -339,10 +339,10 @@ def thor(net, learning_rate, damping, momentum, weight_decay=0.0, loss_scale=1.0 >>> from mindspore import Tensor >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> # Create the dataset taking MNIST as an example. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> temp = Tensor([4e-4, 1e-4, 1e-5, 1e-5], mstype.float32) >>> optim = nn.thor(net, learning_rate=temp, damping=temp, momentum=0.9, loss_scale=128, frequency=4) diff --git a/mindspore/python/mindspore/nn/wrap/cell_wrapper.py b/mindspore/python/mindspore/nn/wrap/cell_wrapper.py index 01cd5536306..8ed0cdcc9de 100644 --- a/mindspore/python/mindspore/nn/wrap/cell_wrapper.py +++ b/mindspore/python/mindspore/nn/wrap/cell_wrapper.py @@ -99,7 +99,7 @@ class WithLossCell(Cell): >>> from mindspore import Tensor, nn >>> import numpy as np >>> # Define the network structure of LeNet5. 
Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
 >>> net_with_criterion = nn.WithLossCell(net, loss_fn)
@@ -132,7 +132,7 @@ class WithLossCell(Cell):
 Examples:
 >>> from mindspore import nn
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
 >>> net_with_criterion = nn.WithLossCell(net, loss_fn)
@@ -175,7 +175,7 @@ class WithGradCell(Cell):
 >>> import mindspore as ms
 >>> from mindspore import nn
 >>> # Defined a network without loss function, taking LeNet5 as an example.
- >>> # Refer to https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # Refer to https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits()
 >>> grad_net = nn.WithGradCell(net, loss_fn)
@@ -346,7 +346,7 @@ class TrainOneStepCell(Cell):
 Examples:
 >>> import mindspore.nn as nn
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits()
 >>> optim = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
@@ -586,7 +586,7 @@ class MicroBatchInterleaved(Cell):
 Examples:
 >>> import mindspore.nn as nn
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> net = nn.MicroBatchInterleaved(net, 2)
 """
@@ -634,7 +634,7 @@ class PipelineCell(Cell):
 Examples:
 >>> import mindspore.nn as nn
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> net = nn.PipelineCell(net, 4)
 """
@@ -685,7 +685,7 @@ class GradAccumulationCell(Cell):
 Examples:
 >>> import mindspore.nn as nn
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> net = nn.GradAccumulationCell(net, 4)
 """
@@ -811,7 +811,7 @@ class VirtualDatasetCellTriple(Cell):
 Examples:
 >>> import mindspore.nn as nn
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> net = nn.VirtualDatasetCellTriple(net)
 """
@@ -854,7 +854,7 @@ class WithEvalCell(Cell):
 Examples:
 >>> import mindspore.nn as nn
 >>> # Define a forward network without loss function, taking LeNet5 as an example.
- >>> # Refer to https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # Refer to https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> loss_fn = nn.SoftmaxCrossEntropyWithLogits()
 >>> eval_net = nn.WithEvalCell(net, loss_fn)
diff --git a/mindspore/python/mindspore/nn/wrap/grad_reducer.py b/mindspore/python/mindspore/nn/wrap/grad_reducer.py
index 76d2940cb83..1f03bdc67f0 100644
--- a/mindspore/python/mindspore/nn/wrap/grad_reducer.py
+++ b/mindspore/python/mindspore/nn/wrap/grad_reducer.py
@@ -335,14 +335,14 @@ class DistributedGradReducer(Cell):
 For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
 Please see the `rank table Startup
- `_
+ `_
 for more details.
 For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- `_ .
+ `_ .
 For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup `_ .
+ Startup `_ .
 This example should be run with multiple devices.
@@ -509,11 +509,11 @@ class PipelineGradReducer(Cell):
 For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
 Please see the `rank table Startup
- `_
+ `_
 for more details.
 For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- `_ .
+ `_ .
 This example should be run with multiple devices.
diff --git a/mindspore/python/mindspore/ops/function/array_func.py b/mindspore/python/mindspore/ops/function/array_func.py
index cbe7744ba87..25385fec34d 100644
--- a/mindspore/python/mindspore/ops/function/array_func.py
+++ b/mindspore/python/mindspore/ops/function/array_func.py
@@ -263,11 +263,11 @@ def concat(tensors, axis=0):
 Alias for :func:`mindspore.ops.cat()`.
 Tutorial Examples:
- - `Tensor - Tensor Operation `_
+ - `Tensor - Tensor Operation `_
 - `Vision Transformer Image Classification - Building ViT as a whole
- `_
+ `_
 - `Sentiment Classification Implemented by RNN - Dense
- `_
+ `_
 """
 return cat(tensors, axis)
@@ -648,8 +648,8 @@ def fill(type, shape, value): # pylint: disable=redefined-outer-name
 Args:
 type (mindspore.dtype): The specified type of output tensor. The data type only supports
- `bool_ `_ and
- `number `_ .
+ `bool_ `_ and
+ `number `_ .
 shape (Union(Tensor, tuple[int])): The specified shape of output tensor.
 value (Union(Tensor, number.Number, bool)): Value to fill the returned tensor.
@@ -1273,7 +1273,7 @@ def size(input_x):
 Args:
 input_x (Tensor): Input parameters, the shape of tensor is :math:`(x_1, x_2, ..., x_R)`. The data type is
- `number `_.
+ `number `_.
 Returns:
 int. A scalar representing the elements' size of `input_x`, tensor is the number of elements
diff --git a/mindspore/python/mindspore/ops/function/grad/grad_func.py b/mindspore/python/mindspore/ops/function/grad/grad_func.py
index 133db9c1359..3106225079e 100644
--- a/mindspore/python/mindspore/ops/function/grad/grad_func.py
+++ b/mindspore/python/mindspore/ops/function/grad/grad_func.py
@@ -660,7 +660,7 @@ def jvp(fn, inputs, v, has_aux=False):
 """
 Compute the jacobian-vector-product of the given network. `jvp` matches `forward-mode differentiation
- `_.
+ `_.
 Args:
 fn (Union[Function, Cell]): The function or net that takes Tensor inputs and returns single Tensor or tuple of
@@ -875,7 +875,7 @@ def vjp(fn, *inputs, weights=None, has_aux=False):
 """
 Compute the vector-jacobian-product of the given network. `vjp` matches `reverse-mode differentiation
- `_.
+ `_.
 Args:
 fn (Union[Function, Cell]): The function or net that takes Tensor inputs and returns single Tensor or tuple of
@@ -1074,7 +1074,7 @@ def jacfwd(fn, grad_position=0, has_aux=False):
 """
 Compute Jacobian via forward mode, corresponding to `forward-mode differentiation
- `_.
+ `_.
 When number of outputs is much greater than that of inputs, it's better to calculate Jacobian via
 forward mode than reverse mode to get better performance.
@@ -1245,7 +1245,7 @@ def jacrev(fn, grad_position=0, has_aux=False):
 """
 Compute Jacobian via reverse mode, corresponding to `reverse-mode differentiation
- `_.
+ `_.
 When number of inputs is much greater than that of outputs, it's better to calculate Jacobian via
 reverse mode than forward mode to get better performance.
@@ -1376,7 +1376,7 @@ def stop_gradient(value):
 StopGradient is used for eliminating the effect of a value on the gradient, such as truncating the gradient
 propagation from an output of a function. For more details, please refer to `Stop Gradient
- `_.
+ `_.
 Args:
 value (Any): The value whose effect on the gradient to be eliminated.
diff --git a/mindspore/python/mindspore/ops/function/linalg_func.py b/mindspore/python/mindspore/ops/function/linalg_func.py
index dd6d99cd5c3..2667d1aa31d 100644
--- a/mindspore/python/mindspore/ops/function/linalg_func.py
+++ b/mindspore/python/mindspore/ops/function/linalg_func.py
@@ -250,7 +250,7 @@ def pinv(x, *, atol=None, rtol=None, hermitian=False):
 Batch matrices are supported. If x is a batch matrix, the output has the same batch dimension when
 atol or rtol is float.
 If atol or rtol is a Tensor, its shape must be broadcast to the singular value returned by
- `x.svd `_ .
+ `x.svd `_ .
 If x.shape is :math:`(B, M, N)`, and the shape of atol or rtol is :math:`(K, B)`, the output
 shape is :math:`(K, B, N, M)`.
 When the Hermitian is True, temporary support only real domain, x is treated as a real symmetric, so x is
@@ -260,13 +260,13 @@ def pinv(x, *, atol=None, rtol=None, hermitian=False):
 characteristic value), it is set to zero, and is not used in the computations. If rtol is not specified and
 x is a matrix of dimensions (M, N), then rtol is set to be :math:`rtol=max(M, N)*\varepsilon`,
 :math:`\varepsilon` is the
- `eps `_ value of x.dtype.
+ `eps `_ value of x.dtype.
 If rtol is not specified and atol specifies a value larger than zero, rtol is set to zero.
 .. note::
 This function uses
- `svd `_ internally,
- (or `eigh `_ ,
+ `svd `_ internally,
+ (or `eigh `_ ,
 when `hermitian = True` ). So it has the same problem as these functions. For details, see the warnings in
 svd() and eigh().
diff --git a/mindspore/python/mindspore/ops/function/math_func.py b/mindspore/python/mindspore/ops/function/math_func.py
index cdeec3d74cc..c47b0dce38a 100644
--- a/mindspore/python/mindspore/ops/function/math_func.py
+++ b/mindspore/python/mindspore/ops/function/math_func.py
@@ -1652,8 +1652,8 @@ def xlogy(input, other):
 Args:
 input (Union[Tensor, number.Number, bool]): The first input is a number.Number or
 a bool or a tensor whose data type is
- `number `_ or
- `bool_ `_.
+ `number `_ or
+ `bool_ `_.
 other (Union[Tensor, number.Number, bool]): The second input is a number.Number or
 a bool when the first input is a tensor or a tensor whose data type is number or bool\_.
 When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
@@ -3015,8 +3015,8 @@ def le(input, other):
 Args:
 input (Union[Tensor, number.Number, bool]): The first input is a number.Number or
 a bool or a tensor whose data type is
- `number `_ or
- `bool_ `_.
+ `number `_ or
+ `bool_ `_.
 other (Union[Tensor, number.Number, bool]): The second input, when the first input is a Tensor,
 the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool\_.
 When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
@@ -3065,8 +3065,8 @@ def gt(input, other):
 Args:
 input (Union[Tensor, number.Number, bool]): The first input is a number.Number or
 a bool or a tensor whose data type is
- `number `_ or
- `bool_ `_ .
+ `number `_ or
+ `bool_ `_ .
 other (Union[Tensor, number.Number, bool]): The second input, when the first input is a Tensor,
 the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool\_.
 When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
diff --git a/mindspore/python/mindspore/ops/function/sparse_unary_func.py b/mindspore/python/mindspore/ops/function/sparse_unary_func.py
index 9dc86a14798..095e4d25e2d 100755
--- a/mindspore/python/mindspore/ops/function/sparse_unary_func.py
+++ b/mindspore/python/mindspore/ops/function/sparse_unary_func.py
@@ -375,7 +375,7 @@ def coo_relu(x: COOTensor) -> COOTensor:
 Args:
 x (COOTensor): Input COOTensor with shape :math:`(N, *)`, where :math:`*` means any number of
 additional dimensions. Its dtype is
- `number `_.
+ `number `_.
 Returns:
 COOTensor, has the same shape and dtype as the `x`.
diff --git a/mindspore/python/mindspore/ops/function/vmap_func.py b/mindspore/python/mindspore/ops/function/vmap_func.py
index b2ec249ad81..f1bb9294db1 100644
--- a/mindspore/python/mindspore/ops/function/vmap_func.py
+++ b/mindspore/python/mindspore/ops/function/vmap_func.py
@@ -27,7 +27,7 @@ def vmap(fn, in_axes=0, out_axes=0):
 Vmap is pioneered by Jax and it removes the restriction of batch dimension on the operator, and provides a
 more convenient and unified operator expression.
 Moreover, it allows users to composite with other functional modules such as :func:`mindspore.grad`, to
 improve the development efficiency, please refer to the
- `Automatic Vectorization (Vmap) `_ tutorial
+ `Automatic Vectorization (Vmap) `_ tutorial
 for more detail.
 In addition, the vectorizing map does not execute loops outside the function, but sinks loops into the
 primitive operations of the function for better performance. When combined with `Graph Kernel Fusion`,
 operational efficiency would be further improved.
diff --git a/mindspore/python/mindspore/ops/op_info_register.py b/mindspore/python/mindspore/ops/op_info_register.py
index f0b86a971de..21b038051f5 100644
--- a/mindspore/python/mindspore/ops/op_info_register.py
+++ b/mindspore/python/mindspore/ops/op_info_register.py
@@ -1050,7 +1050,7 @@
 Tutorial Examples:
 - `Custom Operators (Custom-based) - Defining Custom Operator of aicpu Type
- `_
+ `_
 """
 param_list = [index, name, param_type]
@@ -1090,7 +1090,7 @@
 Tutorial Examples:
 - `Custom Operators (Custom-based) - Defining Custom Operator of aicpu Type
- `_
+ `_
 """
 param_list = [index, name, param_type]
@@ -1118,7 +1118,7 @@
 Tutorial Examples:
 - `Custom Operators (Custom-based) - Defining Custom Operator of aicpu Type
- `_
+ `_
 """
 io_nums = len(self.inputs) + len(self.outputs)
@@ -1175,7 +1175,7 @@
 Tutorial Examples:
 - `Custom Operators (Custom-based) - Defining Custom Operator of aicpu Type
- `_
+ `_
 """
 param_list = [name, param_type, value_type, default_value]
@@ -1201,7 +1201,7 @@
 Tutorial Examples:
 - `Custom Operators (Custom-based) - Defining Custom Operator of aicpu Type
- `_
+ `_
 """
 if target is not None:
@@ -1216,7 +1216,7 @@
 Tutorial Examples:
 - `Custom Operators (Custom-based) - Defining Custom Operator of aicpu Type
- `_
+ `_
 """
 op_info = {}
diff --git a/mindspore/python/mindspore/ops/operations/_inner_ops.py b/mindspore/python/mindspore/ops/operations/_inner_ops.py
index 62ec93c6701..066673a9f3c 100755
--- a/mindspore/python/mindspore/ops/operations/_inner_ops.py
+++ b/mindspore/python/mindspore/ops/operations/_inner_ops.py
@@ -2818,14 +2818,14 @@ class CollectiveGather(Primitive):
 For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
 Please see the `rank table Startup
- `_
+ `_
 for more details.
 For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- `_ .
+ `_ .
 For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup `_ .
+ Startup `_ .
 This example should be run with 4 devices.
diff --git a/mindspore/python/mindspore/ops/operations/array_ops.py b/mindspore/python/mindspore/ops/operations/array_ops.py
index e9f4a8d593f..1dd7933c11a 100755
--- a/mindspore/python/mindspore/ops/operations/array_ops.py
+++ b/mindspore/python/mindspore/ops/operations/array_ops.py
@@ -817,7 +817,7 @@ class Size(Primitive):
 Inputs:
 - **input_x** (Tensor) - Input parameters, the shape of tensor is :math:`(x_1, x_2, ..., x_R)`. The data type is
- `number `_.
+ `number `_.
 Outputs:
 int. A scalar representing the elements' size of `input_x`, tensor is the number of elements
diff --git a/mindspore/python/mindspore/ops/operations/comm_ops.py b/mindspore/python/mindspore/ops/operations/comm_ops.py
index 6b06168a6e0..d967d19da23 100644
--- a/mindspore/python/mindspore/ops/operations/comm_ops.py
+++ b/mindspore/python/mindspore/ops/operations/comm_ops.py
@@ -53,14 +53,14 @@ class ReduceOp:
 For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
 Please see the `rank table Startup
- `_
+ `_
 for more details.
 For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- `_ .
+ `_ .
 For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup `_ .
+ Startup `_ .
 This example should be run with multiple devices.
@@ -144,14 +144,14 @@ class AllReduce(Primitive):
 For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
 Please see the `rank table Startup
- `_
+ `_
 for more details.
 For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- `_ .
+ `_ .
 For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup `_ .
+ Startup `_ .
 This example should be run with 2 devices.
@@ -180,7 +180,7 @@ class AllReduce(Primitive):
 Tutorial Examples:
 - `Distributed Set Communication Primitives - AllReduce
- `_
+ `_
 """
@@ -233,14 +233,14 @@ class AllGather(PrimitiveWithInfer):
 For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
 Please see the `rank table Startup
- `_
+ `_
 for more details.
 For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- `_ .
+ `_ .
 For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup `_ .
+ Startup `_ .
 This example should be run with 2 devices.
@@ -272,7 +272,7 @@ class AllGather(PrimitiveWithInfer):
 Tutorial Examples:
 - `Distributed Set Communication Primitives - AllGather
- `_
+ `_
 """
@@ -458,14 +458,14 @@ class ReduceScatter(Primitive):
 For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
 Please see the `rank table Startup
- `_
+ `_
 for more details.
 For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- `_ .
+ `_ .
 For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup `_ .
+ Startup `_ .
 This example should be run with 2 devices.
@@ -498,7 +498,7 @@ class ReduceScatter(Primitive):
 Tutorial Examples:
 - `Distributed Set Communication Primitives - ReduceScatter
- `_
+ `_
 """
@@ -600,14 +600,14 @@ class Broadcast(PrimitiveWithInfer):
 For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
 Please see the `rank table Startup
- `_
+ `_
 for more details.
 For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- `_ .
+ `_ .
 For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup `_ .
+ Startup `_ .
 This example should be run with multiple devices.
@@ -638,7 +638,7 @@ class Broadcast(PrimitiveWithInfer):
 Tutorial Examples:
 - `Distributed Set Communication Primitives - Broadcast
- `_
+ `_
 """
@@ -718,11 +718,11 @@ class NeighborExchange(Primitive):
 The user needs to preset communication environment variables before running the following example, please
 check the details on the official website of `MindSpore \
- `_.
+ `_.
 This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask
 are in the same subnet, please check the `details \
- `_.
+ `_.
 Args:
 send_rank_ids (list(int)): Ranks which the data is sent to.
@@ -771,7 +771,7 @@ class NeighborExchange(Primitive):
 Tutorial Examples:
 - `Distributed Set Communication Primitives - NeighborExchange
- `_
+ `_
 """
@@ -804,7 +804,7 @@ class AlltoAll(PrimitiveWithInfer):
 Note:
 This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask
 are in the same subnet, please check the `details \
- `_.
+ `_.
 Args:
 split_count (int): On each process, divide blocks into split_count number.
@@ -835,14 +835,14 @@ class AlltoAll(PrimitiveWithInfer):
 For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
 Please see the `rank table Startup
- `_
+ `_
 for more details.
 For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- `_ .
+ `_ .
 For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup `_ .
+ Startup `_ .
 This example should be run with 8 devices.
@@ -873,7 +873,7 @@ class AlltoAll(PrimitiveWithInfer):
 Tutorial Examples:
 - `Distributed Set Communication Primitives - AlltoAll
- `_
+ `_
 """
@@ -921,7 +921,7 @@ class NeighborExchangeV2(Primitive):
 Note:
 This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask
 are in the same subnet, please check the `details \
- `_.
+ `_.
 Args:
 send_rank_ids (list(int)): Ranks which the data is sent to. 8 rank_ids represents 8 directions, if one
@@ -959,14 +959,14 @@ class NeighborExchangeV2(Primitive):
 For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
 Please see the `rank table Startup
- `_
+ `_
 for more details.
 For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- `_ .
+ `_ .
 For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup `_ .
+ Startup `_ .
 This example should be run with 2 devices.
@@ -1017,7 +1017,7 @@ class NeighborExchangeV2(Primitive):
 Tutorial Examples:
 - `Distributed Set Communication Primitives - NeighborExchangeV2
- `_
+ `_
 """
diff --git a/mindspore/python/mindspore/ops/operations/custom_ops.py b/mindspore/python/mindspore/ops/operations/custom_ops.py
index fd3c3aa04e1..601481112d0 100644
--- a/mindspore/python/mindspore/ops/operations/custom_ops.py
+++ b/mindspore/python/mindspore/ops/operations/custom_ops.py
@@ -164,7 +164,7 @@ class Custom(ops.PrimitiveWithInfer):
 function if needed. Then these `Custom` objects can be directly used in neural networks.
 Detailed description and introduction of user-defined operators, including correct writing of parameters,
 please refer to `Custom Operators Tutorial
- `_ .
+ `_ .
 .. warning::
 - This is an experimental API that is subject to change.
diff --git a/mindspore/python/mindspore/ops/operations/math_ops.py b/mindspore/python/mindspore/ops/operations/math_ops.py
index f4aa56a190e..a1cfef9b57f 100644
--- a/mindspore/python/mindspore/ops/operations/math_ops.py
+++ b/mindspore/python/mindspore/ops/operations/math_ops.py
@@ -1165,8 +1165,8 @@ class Sub(_MathBinaryOp):
 Inputs:
 - **x** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
 a bool or a tensor whose data type is
- `number `_ or
- `bool_ `_.
+ `number `_ or
+ `bool_ `_.
 - **y** (Union[Tensor, number.Number, bool]) - The second input, when the first input is a Tensor,
 the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool.
@@ -1564,8 +1564,8 @@ class DivNoNan(Primitive):
 Inputs:
 - **x1** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
 a bool or a tensor whose data type is
- `number `_ or
- `bool_ `_.
+ `number `_ or
+ `bool_ `_.
 - **x2** (Union[Tensor, number.Number, bool]) - The second input is a number.Number or
 a bool when the first input is a bool or a tensor whose data type is number or bool\_.
 When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
@@ -1937,8 +1937,8 @@ class Xlogy(Primitive):
 Inputs:
 - **x** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
 a bool or a tensor whose data type is
- `number `_ or
- `bool_ `_.
+ `number `_ or
+ `bool_ `_.
 - **y** (Union[Tensor, number.Number, bool]) - The second input is a number.Number or
 a bool when the first input is a tensor or a tensor whose data type is number or bool\_.
 When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
diff --git a/mindspore/python/mindspore/ops/operations/nn_ops.py b/mindspore/python/mindspore/ops/operations/nn_ops.py
index 52096707361..f38323be87e 100644
--- a/mindspore/python/mindspore/ops/operations/nn_ops.py
+++ b/mindspore/python/mindspore/ops/operations/nn_ops.py
@@ -449,7 +449,7 @@ class ReLUV3(Primitive):
 Inputs:
 - **input_x** (Tensor) - Tensor of shape :math:`(N, *)`, where :math:`*` means, any number of
 additional dimensions, data type is
- `number `_.
+ `number `_.
 Outputs:
 Tensor of shape :math:`(N, *)`, with the same type and shape as the `input_x`.
diff --git a/mindspore/python/mindspore/ops/primitive.py b/mindspore/python/mindspore/ops/primitive.py
index 09214bcc2aa..4374af59839 100644
--- a/mindspore/python/mindspore/ops/primitive.py
+++ b/mindspore/python/mindspore/ops/primitive.py
@@ -548,7 +548,7 @@ class PrimitiveWithCheck(Primitive):
 the shape and type. Method infer_value() can also be defined (such as PrimitiveWithInfer) for constant propagation.
 More on how to customize a Op, please refer to `Custom Operators
- `_.
+ `_.
 Args:
 name (str): Name of the current Primitive.
@@ -642,7 +642,7 @@ class PrimitiveWithInfer(Primitive):
 logic of the shape and type. The infer_value() is used for constant propagation.
 More on how to customize a Op, please refer to `Custom Operators
- `_.
+ `_.
 Args:
 name (str): Name of the current Primitive.
diff --git a/mindspore/python/mindspore/parallel/algo_parameter_config.py b/mindspore/python/mindspore/parallel/algo_parameter_config.py
index 74f1dd4ba5f..76174e97383 100644
--- a/mindspore/python/mindspore/parallel/algo_parameter_config.py
+++ b/mindspore/python/mindspore/parallel/algo_parameter_config.py
@@ -229,7 +229,7 @@ def set_algo_parameters(**kwargs):
 """
 Set parameters in the algorithm for parallel strategy searching. See a typical use in
 `test_auto_parallel_resnet.py
- `_.
+ `_.
 Note:
 The attribute name is required. This interface works ONLY in AUTO_PARALLEL mode.
@@ -266,14 +266,14 @@ def set_algo_parameters(**kwargs):
 For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
 Please see the `rank table startup
- `_
+ `_
 for more details.
 For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun startup
- `_ .
+ `_ .
 For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup `_ .
+ Startup `_ .
 >>> import numpy as np
 >>> import mindspore as ms
diff --git a/mindspore/python/mindspore/parallel/checkpoint_transform.py b/mindspore/python/mindspore/parallel/checkpoint_transform.py
index 57be1de419b..3a8aa775779 100644
--- a/mindspore/python/mindspore/parallel/checkpoint_transform.py
+++ b/mindspore/python/mindspore/parallel/checkpoint_transform.py
@@ -37,7 +37,7 @@ def merge_pipeline_strategys(src_strategy_dirs, dst_strategy_file):
 """
 Merge parallel strategy between all pipeline stages in pipeline parallel mode. For more details about
 converting distributed Checkpoint, please refer to
- `Model Transformation `_.
+ `Model Transformation `_.
 Note:
 Strategy file of each pipeline stage should be included in src_strategy_dirs.
@@ -77,7 +77,7 @@ def rank_list_for_transform(rank_id, src_strategy_file=None, dst_strategy_file=N
 """
 List of original distributed checkpoint rank index for obtaining the target checkpoint of a rank_id during the
 distributed checkpoint conversion. For more details about converting distributed Checkpoint, please refer to
- `Model Transformation `_.
+ `Model Transformation `_.
 Args:
 rank_id (int): The rank of which distributed checkpoint needs to be obtained after conversion.
@@ -140,7 +140,7 @@ def transform_checkpoint_by_rank(rank_id, checkpoint_files_map, save_checkpoint_
 """
 Transform distributed checkpoint from source sharding strategy to destination sharding strategy by rank
 for a network. For more details about converting distributed Checkpoint, please refer to
- `Model Transformation `_.
+ `Model Transformation `_.
 Args:
 rank_id (int): The rank of which distributed checkpoint needs to be obtained after conversion.
@@ -230,7 +230,7 @@ def transform_checkpoints(src_checkpoints_dir, dst_checkpoints_dir, ckpt_prefix,
 """
 Transform distributed checkpoint from source sharding strategy to destination sharding strategy for a rank.
 For more details about converting distributed Checkpoint, please refer to
- `Model Transformation `_.
+ `Model Transformation `_.
 Note:
 The `src_checkpoints_dir` directory structure should be organized like "src_checkpoints_dir/rank_0/a.ckpt", the
diff --git a/mindspore/python/mindspore/parallel/parameter_broadcast.py b/mindspore/python/mindspore/parallel/parameter_broadcast.py
index 8ee43c2000d..60f1defabe3 100644
--- a/mindspore/python/mindspore/parallel/parameter_broadcast.py
+++ b/mindspore/python/mindspore/parallel/parameter_broadcast.py
@@ -84,7 +84,7 @@ def parameter_broadcast(net, layout, cur_rank=0, initial_rank=0):
 >>> net.matmul2.shard(((1, 8), (8, 1)))
 >>> net.relu2.shard(((8, 1),))
 >>> # Create the dataset taking MNIST as an example. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py
 >>> dataset = create_dataset()
 >>> optim = nn.SGD(net.trainable_params(), 1e-2)
 >>> loss = nn.CrossEntropyLoss()
diff --git a/mindspore/python/mindspore/parallel/shard.py b/mindspore/python/mindspore/parallel/shard.py
index 621dcc06eb4..36ef176b4f9 100644
--- a/mindspore/python/mindspore/parallel/shard.py
+++ b/mindspore/python/mindspore/parallel/shard.py
@@ -328,7 +328,7 @@ def shard(fn, in_strategy, out_strategy=None, parameter_plan=None, device="Ascen
 Tutorial Examples:
 - `Functional Operator Sharding
- `_
+ `_
 """
 if not isinstance(fn, (ms.nn.Cell)):
 logger.warning("'fn' is not a mindspore.nn.Cell, and its definition cannot involve Parameter; "
diff --git a/mindspore/python/mindspore/profiler/profiling.py b/mindspore/python/mindspore/profiler/profiling.py
index ccc2057e925..51f870a1078 100644
--- a/mindspore/python/mindspore/profiler/profiling.py
+++ b/mindspore/python/mindspore/profiler/profiling.py
@@ -662,12 +662,12 @@ class Profiler:
 >>> # Profiler init.
 >>> profiler = Profiler()
 >>> # Train Model or eval Model, taking LeNet5 as an example.
- >>> # Refer to https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # Refer to https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
 >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
 >>> # Create the dataset taking MNIST as an example.
- >>> # Refer to https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py
+ >>> # Refer to https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py
 >>> dataloader = create_dataset()
 >>> model = Model(net, loss, optimizer)
 >>> model.train(5, dataloader, dataset_sink_mode=False)
diff --git a/mindspore/python/mindspore/rewrite/api/node.py b/mindspore/python/mindspore/rewrite/api/node.py
index b2efcb73810..3201ecc03d5 100644
--- a/mindspore/python/mindspore/rewrite/api/node.py
+++ b/mindspore/python/mindspore/rewrite/api/node.py
@@ -89,7 +89,7 @@ class Node:
 >>> from mindspore.rewrite import SymbolTree, ScopedValue
 >>> import mindspore.nn as nn
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv1")
@@ -144,7 +144,7 @@ class Node:
 >>> import mindspore.nn as nn
 >>> import mindspore.ops as ops
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv1")
@@ -184,7 +184,7 @@ class Node:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv2")
@@ -204,7 +204,7 @@ class Node:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv1")
@@ -229,7 +229,7 @@ class Node:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("relu_3")
@@ -267,7 +267,7 @@ class Node:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> src_node = stree.get_node("fc1")
@@ -307,7 +307,7 @@ class Node:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv1")
@@ -327,7 +327,7 @@ class Node:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv1")
@@ -354,7 +354,7 @@ class Node:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv1")
@@ -377,7 +377,7 @@ class Node:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv1")
@@ -396,7 +396,7 @@ class Node:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv1")
diff --git a/mindspore/python/mindspore/rewrite/api/symbol_tree.py b/mindspore/python/mindspore/rewrite/api/symbol_tree.py
index 0ba12a4e885..9942a2e55eb 100644
--- a/mindspore/python/mindspore/rewrite/api/symbol_tree.py
+++ b/mindspore/python/mindspore/rewrite/api/symbol_tree.py
@@ -119,7 +119,7 @@ class SymbolTree:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> print(type(stree))
@@ -163,7 +163,7 @@ class SymbolTree:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> print([node.get_name() for node in stree.nodes()])
@@ -188,7 +188,7 @@ class SymbolTree:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node('conv1')
@@ -221,7 +221,7 @@ class SymbolTree:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> for node in stree.nodes():
@@ -250,7 +250,7 @@ class SymbolTree:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> for node in stree.nodes():
@@ -284,7 +284,7 @@ class SymbolTree:
 Examples:
 >>> from mindspore.rewrite import SymbolTree, ScopedValue
 >>> import mindspore.nn as nn
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv1")
@@ -313,7 +313,7 @@ class SymbolTree:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv1")
@@ -351,7 +351,7 @@ class SymbolTree:
 Examples:
 >>> from mindspore.rewrite import SymbolTree, ScopedValue
 >>> import mindspore.nn as nn
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> node = stree.get_node("conv1")
@@ -397,7 +397,7 @@ class SymbolTree:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5. Refer to
- >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+ >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
 >>> net = LeNet5()
 >>> stree = SymbolTree.create(net)
 >>> stree.print_node_tabulate()
@@ -417,7 +417,7 @@ class SymbolTree:
 Examples:
 >>> from mindspore.rewrite import SymbolTree
 >>> # Define the network structure of LeNet5.
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> stree = SymbolTree.create(net) >>> codes = stree.get_code() @@ -444,7 +444,7 @@ class SymbolTree: Examples: >>> from mindspore.rewrite import SymbolTree >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> stree = SymbolTree.create(net) >>> new_net = stree.get_network() diff --git a/mindspore/python/mindspore/run_check/_check_version.py b/mindspore/python/mindspore/run_check/_check_version.py index e6cd94d1976..cbf52c801c2 100644 --- a/mindspore/python/mindspore/run_check/_check_version.py +++ b/mindspore/python/mindspore/run_check/_check_version.py @@ -527,7 +527,7 @@ def check_version_and_env_config(): except OSError: logger.warning("Pre-Load Library libgomp.so.1 failed, which might cause TLS memory allocation failure. 
If " "the failure occurs, please refer to the FAQ for a solution: " - "https://www.mindspore.cn/docs/en/r2.3.q1/faq/installation.html.") + "https://www.mindspore.cn/docs/en/master/faq/installation.html.") MSContext.get_instance().register_check_env_callback(check_env) MSContext.get_instance().register_set_env_callback(set_env) MSContext.get_instance().set_device_target_inner(MSContext.get_instance().get_param(ms_ctx_param.device_target)) diff --git a/mindspore/python/mindspore/safeguard/rewrite_obfuscation.py b/mindspore/python/mindspore/safeguard/rewrite_obfuscation.py index 14369d10371..7bb004423dd 100644 --- a/mindspore/python/mindspore/safeguard/rewrite_obfuscation.py +++ b/mindspore/python/mindspore/safeguard/rewrite_obfuscation.py @@ -73,7 +73,7 @@ def obfuscate_ckpt(network, ckpt_files, target_modules=None, saved_path='./', ob Examples: >>> from mindspore import obfuscate_ckpt, save_checkpoint - >>> # Refer to https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # Refer to https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> save_checkpoint(net, './test_net.ckpt') >>> target_modules = ['', 'fc1|fc2'] @@ -204,7 +204,7 @@ def load_obf_params_into_net(network, target_modules, obf_ratios, data_parallel_ >>> from mindspore import obfuscate_ckpt, save_checkpoint, load_checkpoint, Tensor >>> import mindspore.common.dtype as mstype >>> import numpy as np - >>> # Refer to https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # Refer to https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> save_checkpoint(net, './test_net.ckpt') >>> target_modules = ['', 'fc1|fc2'] diff --git a/mindspore/python/mindspore/train/amp.py b/mindspore/python/mindspore/train/amp.py index e7a3d314bc7..bc7c880bc85 100644 --- a/mindspore/python/mindspore/train/amp.py +++ b/mindspore/python/mindspore/train/amp.py @@ -331,7 +331,7 @@ def 
auto_mixed_precision(network, amp_level="O0", dtype=mstype.float16): :class:`mindspore.nn.LayerNorm`] For details on automatic mixed precision, refer to - `Automatic Mix Precision `_ . + `Automatic Mix Precision `_ . Note: - Repeatedly calling mixed-precision interfaces, such as `custom_mixed_precision` and `auto_mixed_precision`, @@ -362,7 +362,7 @@ def auto_mixed_precision(network, amp_level="O0", dtype=mstype.float16): Examples: >>> from mindspore import amp >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> network = LeNet5() >>> amp_level = "O1" >>> net = amp.auto_mixed_precision(network, amp_level) @@ -597,7 +597,7 @@ def build_train_network(network, optimizer, loss_fn=None, level='O0', boost_leve Examples: >>> from mindspore import amp, nn >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> network = LeNet5() >>> net_loss = nn.SoftmaxCrossEntropyWithLogits(reduction="mean") >>> net_opt = nn.Momentum(network.trainable_params(), learning_rate=0.01, momentum=0.9) @@ -744,7 +744,7 @@ def custom_mixed_precision(network, *, white_list=None, black_list=None, dtype=m Examples: >>> from mindspore import amp, nn >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> custom_white_list = amp.get_white_list() >>> custom_white_list.append(nn.Flatten) diff --git a/mindspore/python/mindspore/train/callback/_backup_and_restore.py b/mindspore/python/mindspore/train/callback/_backup_and_restore.py index 5d6454fd31e..8d0b7c04f70 100644 --- a/mindspore/python/mindspore/train/callback/_backup_and_restore.py +++ b/mindspore/python/mindspore/train/callback/_backup_and_restore.py @@ -50,13 +50,13 @@ class BackupAndRestore(Callback): >>> from mindspore.train import Model, BackupAndRestore, RunContext >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean') >>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9) >>> model = Model(net, loss_fn=loss, optimizer=optim) >>> # Create the dataset taking MNIST as an example. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> backup_ckpt = BackupAndRestore("backup") >>> model.train(10, dataset, callbacks=backup_ckpt) diff --git a/mindspore/python/mindspore/train/callback/_callback.py b/mindspore/python/mindspore/train/callback/_callback.py index 7c621818a90..1b2433715e0 100644 --- a/mindspore/python/mindspore/train/callback/_callback.py +++ b/mindspore/python/mindspore/train/callback/_callback.py @@ -123,7 +123,7 @@ class Callback: recording current attributes. Users can add custimized attributes to the information. Training process can also be stopped by calling `request_stop` method. 
For details of custom Callback, please check - `Callback tutorial `_. Examples: @@ -493,7 +493,7 @@ class RunContext: `RunContext.original_args()` and add extra attributes to the information, but also can stop the training process by calling `request_stop` method. For details of custom Callback, please check - `Callback Mechanism `_. + `Callback Mechanism `_. `RunContext.original_args()` holds the model context information as a dictionary variable, and different attributes of the dictionary are stored in training or eval process. Details are as follows: @@ -575,7 +575,7 @@ class RunContext: Tutorial Examples: - `Callback Mechanism - Customized Callback Mechanism - `_ + `_ """ return self._original_args @@ -588,7 +588,7 @@ class RunContext: Tutorial Examples: - `Callback Mechanism - Customized Training Termination Time - `_ """ self._stop_requested = True diff --git a/mindspore/python/mindspore/train/callback/_checkpoint.py b/mindspore/python/mindspore/train/callback/_checkpoint.py index de072b6ae39..16f04658b9b 100644 --- a/mindspore/python/mindspore/train/callback/_checkpoint.py +++ b/mindspore/python/mindspore/train/callback/_checkpoint.py @@ -114,13 +114,13 @@ class CheckpointConfig: >>> from mindspore.train import Model, CheckpointConfig, ModelCheckpoint >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean') >>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9) >>> model = Model(net, loss_fn=loss, optimizer=optim) >>> # Create the dataset taking MNIST as an example. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> config = CheckpointConfig(save_checkpoint_seconds=100, keep_checkpoint_per_n_minutes=5, saved_network=net) >>> config.save_checkpoint_steps diff --git a/mindspore/python/mindspore/train/callback/_early_stop.py b/mindspore/python/mindspore/train/callback/_early_stop.py index a1e0343a623..b81b3e3ed75 100644 --- a/mindspore/python/mindspore/train/callback/_early_stop.py +++ b/mindspore/python/mindspore/train/callback/_early_stop.py @@ -85,13 +85,13 @@ class EarlyStopping(Callback): >>> from mindspore import nn >>> from mindspore.train import Model, EarlyStopping >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean') >>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9) >>> model = Model(net, loss_fn=loss, optimizer=optim, metrics={"acc"}) >>> # Create the dataset taking MNIST as an example. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> cb = EarlyStopping(monitor="acc", patience=3, verbose=True) >>> model.fit(10, dataset, callbacks=cb) diff --git a/mindspore/python/mindspore/train/callback/_landscape.py b/mindspore/python/mindspore/train/callback/_landscape.py index 7eebd92eba4..3d1cf8f5f51 100644 --- a/mindspore/python/mindspore/train/callback/_landscape.py +++ b/mindspore/python/mindspore/train/callback/_landscape.py @@ -186,10 +186,10 @@ class SummaryLandscape: ... # If the device_target is Ascend, set the device_target to "Ascend" ... 
ms.set_context(mode=ms.GRAPH_MODE, device_target="GPU") ... # Create the dataset taking MNIST as an example. Refer to - ... # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + ... # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py ... ds_train = create_dataset() ... # Define the network structure of LeNet5. Refer to - ... # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + ... # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py ... network = LeNet5() ... net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean") ... net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9) @@ -209,13 +209,13 @@ class SummaryLandscape: ... # Simple usage for visualization landscape: ... def callback_fn(): ... # Define the network structure of LeNet5. Refer to - ... # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + ... # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py ... network = LeNet5() ... net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean") ... metrics = {"Loss": Loss()} ... model = Model(network, net_loss, metrics=metrics) ... # Create the dataset taking MNIST as an example. Refer to - ... # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + ... # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py ... ds_eval = create_dataset() ... return model, network, ds_eval, metrics ... diff --git a/mindspore/python/mindspore/train/callback/_loss_monitor.py b/mindspore/python/mindspore/train/callback/_loss_monitor.py index db0f7f57eea..f8d0a207096 100644 --- a/mindspore/python/mindspore/train/callback/_loss_monitor.py +++ b/mindspore/python/mindspore/train/callback/_loss_monitor.py @@ -43,13 +43,13 @@ class LossMonitor(Callback): >>> from mindspore.train import Model, LossMonitor >>> >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean') >>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9) >>> model = Model(net, loss_fn=loss, optimizer=optim) >>> # Create the dataset taking MNIST as an example. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> loss_monitor = LossMonitor() >>> model.train(10, dataset, callbacks=loss_monitor) diff --git a/mindspore/python/mindspore/train/callback/_on_request_exit.py b/mindspore/python/mindspore/train/callback/_on_request_exit.py index 769a9a61d51..97066b52f23 100644 --- a/mindspore/python/mindspore/train/callback/_on_request_exit.py +++ b/mindspore/python/mindspore/train/callback/_on_request_exit.py @@ -55,13 +55,13 @@ class OnRequestExit(Callback): >>> import mindspore as ms >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean') >>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9) >>> model = Model(net, loss_fn=loss, optimizer=optim) >>> # Create the dataset taking MNIST as an example. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> on_request_exit = ms.train.OnRequestExit(file_name='LeNet5') >>> model.train(10, dataset, callbacks=on_request_exit) diff --git a/mindspore/python/mindspore/train/callback/_reduce_lr_on_plateau.py b/mindspore/python/mindspore/train/callback/_reduce_lr_on_plateau.py index 0d2f63542ad..d3c71964304 100644 --- a/mindspore/python/mindspore/train/callback/_reduce_lr_on_plateau.py +++ b/mindspore/python/mindspore/train/callback/_reduce_lr_on_plateau.py @@ -84,13 +84,13 @@ class ReduceLROnPlateau(Callback): >>> from mindspore import nn >>> from mindspore.train import Model, ReduceLROnPlateau >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean') >>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9) >>> model = Model(net, loss_fn=loss, optimizer=optim, metrics={"acc"}) >>> # Create the dataset taking MNIST as an example. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> cb = ReduceLROnPlateau(monitor="acc", patience=3, verbose=True) >>> model.fit(10, dataset, callbacks=cb) diff --git a/mindspore/python/mindspore/train/callback/_summary_collector.py b/mindspore/python/mindspore/train/callback/_summary_collector.py index e6f21ba4b72..46b06d28936 100644 --- a/mindspore/python/mindspore/train/callback/_summary_collector.py +++ b/mindspore/python/mindspore/train/callback/_summary_collector.py @@ -190,10 +190,10 @@ class SummaryCollector(Callback): ... 
ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend") ... mnist_dataset_dir = '/path/to/mnist_dataset_directory' ... # Create the dataset taking MNIST as an example. Refer to - ... # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + ... # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py ... ds_train = create_dataset() ... # Define the network structure of LeNet5. Refer to - ... # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + ... # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py ... network = LeNet5(10) ... net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean") ... net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9) diff --git a/mindspore/python/mindspore/train/callback/_time_monitor.py b/mindspore/python/mindspore/train/callback/_time_monitor.py index b28051f9499..fe8656eba3f 100644 --- a/mindspore/python/mindspore/train/callback/_time_monitor.py +++ b/mindspore/python/mindspore/train/callback/_time_monitor.py @@ -43,13 +43,13 @@ class TimeMonitor(Callback): >>> from mindspore.train import Model, TimeMonitor >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean') >>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9) >>> model = Model(net, loss_fn=loss, optimizer=optim) >>> # Create the dataset taking MNIST as an example. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> time_monitor = TimeMonitor() >>> model.train(10, dataset, callbacks=time_monitor) diff --git a/mindspore/python/mindspore/train/dataset_helper.py b/mindspore/python/mindspore/train/dataset_helper.py index 23aae62a519..6d6eac65a7b 100644 --- a/mindspore/python/mindspore/train/dataset_helper.py +++ b/mindspore/python/mindspore/train/dataset_helper.py @@ -191,7 +191,7 @@ def _get_dataset_aux(dataset): def connect_network_with_dataset(network, dataset_helper): """ Connect the `network` with dataset in `dataset_helper`. Only supported in `sink mode - `_, (dataset_sink_mode=True). + `_, (dataset_sink_mode=True). Args: network (Cell): The training network for dataset. diff --git a/mindspore/python/mindspore/train/loss_scale_manager.py b/mindspore/python/mindspore/train/loss_scale_manager.py index 7e5699e8ab3..db317e89153 100644 --- a/mindspore/python/mindspore/train/loss_scale_manager.py +++ b/mindspore/python/mindspore/train/loss_scale_manager.py @@ -62,7 +62,7 @@ class FixedLossScaleManager(LossScaleManager): >>> from mindspore import amp, nn >>> >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_scale = 1024.0 >>> loss_scale_manager = amp.FixedLossScaleManager(loss_scale, False) @@ -136,7 +136,7 @@ class DynamicLossScaleManager(LossScaleManager): >>> from mindspore import amp, nn >>> >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss_scale_manager = amp.DynamicLossScaleManager() >>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9) diff --git a/mindspore/python/mindspore/train/metrics/metric.py b/mindspore/python/mindspore/train/metrics/metric.py index 07dcfa95927..463fd873a62 100644 --- a/mindspore/python/mindspore/train/metrics/metric.py +++ b/mindspore/python/mindspore/train/metrics/metric.py @@ -200,7 +200,7 @@ class Metric(metaclass=ABCMeta): Tutorial Examples: - `Evaluation Metrics - Customized Metrics - `_ + `_ """ raise NotImplementedError('Must define clear function to use this base class') @@ -214,7 +214,7 @@ class Metric(metaclass=ABCMeta): Tutorial Examples: - `Evaluation Metrics - Customized Metrics - `_ + `_ """ raise NotImplementedError('Must define eval function to use this base class') @@ -231,7 +231,7 @@ class Metric(metaclass=ABCMeta): Tutorial Examples: - `Evaluation Metrics - Customized Metrics - `_ + `_ """ raise NotImplementedError('Must define update function to use this base class') diff --git a/mindspore/python/mindspore/train/model.py b/mindspore/python/mindspore/train/model.py index d4f3d044321..38c1cd150e5 100644 --- a/mindspore/python/mindspore/train/model.py +++ b/mindspore/python/mindspore/train/model.py @@ -190,7 +190,7 @@ class Model: >>> from mindspore.train import Model >>> >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9) @@ -199,7 +199,7 @@ class Model: >>> model.predict_network >>> model.eval_network >>> # Create the dataset taking MNIST as an example. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> model.train(2, dataset) """ @@ -1022,10 +1022,10 @@ class Model: >>> from mindspore.train import Model >>> >>> # Create the dataset taking MNIST as an example. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> loss_scale_manager = ms.FixedLossScaleManager(1024., False) @@ -1175,11 +1175,11 @@ class Model: >>> from mindspore.train import Model >>> >>> # Create the dataset taking MNIST as an example. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> train_dataset = create_dataset("train") >>> valid_dataset = create_dataset("test") >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9) @@ -1188,7 +1188,7 @@ class Model: Tutorial Examples: - `Advanced Encapsulation: Model - Train and Save Model - `_ + `_ """ device_target = context.get_context("device_target") if _is_ps_mode() and not _cache_enable() and (device_target in ["Ascend", "CPU"]) and dataset_sink_mode: @@ -1268,10 +1268,10 @@ class Model: >>> from mindspore.amp import FixedLossScaleManager >>> >>> # Create the dataset taking MNIST as an example. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits() >>> loss_scale_manager = FixedLossScaleManager() @@ -1444,10 +1444,10 @@ class Model: >>> from mindspore.train import Model >>> >>> # Create the dataset taking MNIST as an example. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> # Define the network structure of LeNet5. 
Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> net = LeNet5() >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True) >>> model = Model(net, loss_fn=loss, optimizer=None, metrics={'acc'}) @@ -1455,7 +1455,7 @@ class Model: Tutorial Examples: - `Advanced Encapsulation: Model - Train and Save Model - `_ + `_ """ valid_dataset = self._prepare_obf_dataset(valid_dataset) dataset_sink_mode = Validator.check_bool(dataset_sink_mode) @@ -1701,7 +1701,7 @@ class Model: >>> >>> input_data = Tensor(np.random.randint(0, 255, [1, 1, 32, 32]), mindspore.float32) >>> # Define the network structure of LeNet5. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py >>> model = Model(LeNet5()) >>> result = model.predict(input_data) """ @@ -1809,10 +1809,10 @@ class Model: >>> ms.set_auto_parallel_context(parallel_mode=ms.ParallelMode.SEMI_AUTO_PARALLEL) >>> >>> # Create the dataset taking MNIST as an example. Refer to - >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py + >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py >>> dataset = create_dataset() >>> # Define the network structure of LeNet5. 
Refer to
-        >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+        >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
         >>> net = LeNet5()
         >>> loss = nn.SoftmaxCrossEntropyWithLogits()
         >>> loss_scale_manager = ms.FixedLossScaleManager()
diff --git a/mindspore/python/mindspore/train/serialization.py b/mindspore/python/mindspore/train/serialization.py
index efb0e923fe7..f837b36ffcd 100644
--- a/mindspore/python/mindspore/train/serialization.py
+++ b/mindspore/python/mindspore/train/serialization.py
@@ -437,7 +437,7 @@ def save_checkpoint(save_obj, ckpt_file_name, integrated_save=True,
         >>> import mindspore as ms
         >>>
         >>> # Define the network structure of LeNet5. Refer to
-        >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+        >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
         >>> net = LeNet5()
         >>> ms.save_checkpoint(net, "./lenet.ckpt",
         ...                    choice_func=lambda x: x.startswith("conv") and not x.startswith("conv1"))
@@ -457,7 +457,7 @@ def save_checkpoint(save_obj, ckpt_file_name, integrated_save=True,
     Tutorial Examples:
         - `Saving and Loading the Model - Saving and Loading the Model Weight
-          `_
+          `_
    """
    ckpt_file_name = _check_save_obj_and_ckpt_file_name(save_obj, ckpt_file_name)
    integrated_save = Validator.check_bool(integrated_save)
@@ -713,7 +713,7 @@ def load(file_name, **kwargs):
            - obf_func (function): A python function used for loading obfuscated MindIR model,
              which can refer to `obfuscate_model()
-             `_.
+             `_.

    Returns:
        GraphCell, a compiled graph that can executed by `GraphCell`.
@@ -743,7 +743,7 @@ def load(file_name, **kwargs):
    Tutorial Examples:
        - `Saving and Loading the Model - Saving and Loading MindIR
-          `_
+          `_
    """
    if not isinstance(file_name, str):
        raise ValueError("For 'load', the argument 'file_name' must be string, but "
@@ -950,7 +950,7 @@ def obfuscate_model(obf_config, **kwargs):
        >>> import mindspore.nn as nn
        >>> import numpy as np
        >>> # Download ori_net.mindir
-        >>> # https://gitee.com/mindspore/mindspore/blob/r2.3.q1/tests/ut/python/mindir/ori_net.mindir
+        >>> # https://gitee.com/mindspore/mindspore/blob/master/tests/ut/python/mindir/ori_net.mindir
        >>> input1 = ms.Tensor(np.ones((1, 1, 32, 32)).astype(np.float32))
        >>> obf_config = {'original_model_path': "./net.mindir",
        ...               'save_model_path': "./obf_net",
@@ -1089,7 +1089,7 @@ def load_checkpoint(ckpt_file_name, net=None, strict_load=False, filter_prefix=N
    Tutorial Examples:
        - `Saving and Loading the Model - Saving and Loading the Model Weight
-          `_
+          `_
    """
    ckpt_file_name = _check_ckpt_file_name(ckpt_file_name)
    specify_prefix = _check_prefix(specify_prefix)
@@ -1225,10 +1225,10 @@ def load_checkpoint_async(ckpt_file_name, net=None, strict_load=False, filter_pr
        >>> from mindspore import load_param_into_net
        >>> context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
        >>> # Create the dataset taking MNIST as an example. Refer to
-        >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py
+        >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py
        >>> dataset = create_dataset()
        >>> # Define the network structure of LeNet5. Refer to
-        >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+        >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
        >>> ckpt_file = "./checkpoint/LeNet5-1_32.ckpt"
        >>> net = LeNet5()
        >>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
@@ -1395,7 +1395,7 @@ def load_param_into_net(net, parameter_dict, strict_load=False):
        >>> import mindspore as ms
        >>>
        >>> # Define the network structure of LeNet5. Refer to
-        >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+        >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
        >>> net = LeNet5()
        >>> ckpt_file_name = "./checkpoint/LeNet5-1_32.ckpt"
        >>> param_dict = ms.load_checkpoint(ckpt_file_name, filter_prefix="conv1")
@@ -1405,7 +1405,7 @@ def load_param_into_net(net, parameter_dict, strict_load=False):
    Tutorial Examples:
        - `Saving and Loading the Model - Saving and Loading the Model Weight
-          `_
+          `_
    """
    if not isinstance(net, nn.Cell):
        logger.critical("Failed to combine the net and the parameters.")
@@ -1729,7 +1729,7 @@ def export(net, *inputs, file_name, file_format, **kwargs):
        >>> from mindspore import Tensor
        >>>
        >>> # Define the network structure of LeNet5. Refer to
-        >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+        >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
        >>> net = LeNet5()
        >>> input_tensor = Tensor(np.ones([1, 1, 32, 32]).astype(np.float32))
        >>> ms.export(net, input_tensor, file_name='lenet', file_format='MINDIR')
@@ -1746,7 +1746,7 @@ def export(net, *inputs, file_name, file_format, **kwargs):
    Tutorial Examples:
        - `Saving and Loading the Model - Saving and Loading MindIR
-          `_
+          `_
    """
    old_ms_jit_value = context.get_context("jit_syntax_level")
    context.set_context(jit_syntax_level=mindspore.STRICT)
@@ -1809,7 +1809,7 @@ def _get_funcgraph(net, *inputs):
        >>> from mindspore import Tensor
        >>>
        >>> # Define the network structure of LeNet5. Refer to
-        >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+        >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
        >>> net = LeNet5()
        >>> input_tensor = Tensor(np.ones([1, 1, 32, 32]).astype(np.float32))
        >>> ms.get_funcgraph(net, input_tensor)
@@ -2574,14 +2574,14 @@ def load_distributed_checkpoint(network, checkpoint_filenames, predict_strategy=
        For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
        Please see the `rank table startup
-        `_
+        `_
        for more details.
        For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun startup
-        `_ .
+        `_ .
        For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
-        Startup `_ .
+        Startup `_ .

        >>> import os
        >>> import numpy as np
diff --git a/mindspore/python/mindspore/train/summary/summary_record.py b/mindspore/python/mindspore/train/summary/summary_record.py
index bfc6ce27363..1b32e0ce34f 100644
--- a/mindspore/python/mindspore/train/summary/summary_record.py
+++ b/mindspore/python/mindspore/train/summary/summary_record.py
@@ -281,25 +281,25 @@ class SummaryRecord:
                LossLandscape]): The value to store.

                - The data type of value should be 'GraphProto' (see `mindspore/ccsrc/anf_ir.proto
-                  `_) object
+                  `_) object
                  when the plugin is 'graph'.
                - The data type of value should be 'Tensor' object when the plugin is 'scalar', 'image', 'tensor'
                  or 'histogram'.
                - The data type of value should be a 'TrainLineage' object when the plugin is 'train_lineage',
                  see `mindspore/ccsrc/lineage.proto
-                  `_.
+                  `_.
                - The data type of value should be a 'EvaluationLineage' object when the plugin is 'eval_lineage',
                  see `mindspore/ccsrc/lineage.proto
-                  `_.
+                  `_.
                - The data type of value should be a 'DatasetGraph' object when the plugin is 'dataset_graph',
                  see `mindspore/ccsrc/lineage.proto
-                  `_.
+                  `_.
                - The data type of value should be a 'UserDefinedInfo' object when the plugin is 'custom_lineage_data',
                  see `mindspore/ccsrc/lineage.proto
-                  `_.
+                  `_.
                - The data type of value should be a 'LossLandscape' object when the plugin is 'LANDSCAPE',
                  see `mindspore/ccsrc/summary.proto
-                  `_.
+                  `_.

        Raises:
            ValueError: `plugin` is not in the optional value.
@@ -394,7 +394,7 @@ class SummaryRecord:
        Raises:
            TypeError: `step` is not int, or `train_network` is not `mindspore.nn.Cell
-            `_ .
+            `_ .

        Examples:
            >>> import mindspore as ms
diff --git a/mindspore/python/mindspore/train/train_thor/convert_utils.py b/mindspore/python/mindspore/train/train_thor/convert_utils.py
index 2f6229131d9..f8566b6841a 100644
--- a/mindspore/python/mindspore/train/train_thor/convert_utils.py
+++ b/mindspore/python/mindspore/train/train_thor/convert_utils.py
@@ -175,7 +175,7 @@ class ConvertNetUtils:
        >>> from mindspore.nn import thor
        >>>
        >>> # Define the network structure of LeNet5. Refer to
-        >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+        >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
        >>> net = LeNet5()
        >>> temp = Tensor([4e-4, 1e-4, 1e-5, 1e-5], ms.float32)
        >>> opt = thor(net, learning_rate=temp, damping=temp, momentum=0.9, loss_scale=128, frequency=4)
@@ -239,10 +239,10 @@ class ConvertModelUtils:
        >>> from mindspore.nn import thor
        >>>
        >>> # Define the network structure of LeNet5. Refer to
-        >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py
+        >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
        >>> net = LeNet5()
        >>> # Create the dataset taking MNIST as an example. Refer to
-        >>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/mnist.py
+        >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/mnist.py
        >>> dataset = create_dataset()
        >>> temp = Tensor([4e-4, 1e-4, 1e-5, 1e-5], ms.float32)
        >>> opt = thor(net, learning_rate=temp, damping=temp, momentum=0.9, loss_scale=128, frequency=4)
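Every hunk in this patch applies the same mechanical change: documentation links pinned to the `r2.3.q1` release branch are retargeted to `master`. A minimal sketch of that rewrite as a substitution (the `retarget_branch` helper and its regex are illustrative, not part of the patch or the MindSpore codebase):

```python
import re


def retarget_branch(text: str, old: str = "r2.3.q1", new: str = "master") -> str:
    """Rewrite gitee.com doc links pinned to branch `old` so they point at `new`.

    Matches URLs of the form gitee.com/mindspore/<repo>/blob/<old>/... and
    replaces only the branch segment, leaving the rest of the path untouched.
    """
    pattern = r"(gitee\.com/mindspore/[\w.-]+/blob/)" + re.escape(old)
    return re.sub(pattern, lambda m: m.group(1) + new, text)


line = ">>> # https://gitee.com/mindspore/docs/blob/r2.3.q1/docs/mindspore/code/lenet.py"
print(retarget_branch(line))
# -> >>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
```

Anchoring the pattern to the `gitee.com/mindspore/.../blob/` prefix keeps the substitution from touching an unrelated occurrence of the branch name elsewhere in a docstring.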