!29041 fix doc issues

Merge pull request !29041 from luoyang/code_docs_chinese
i-robot 2022-01-13 13:04:46 +00:00 committed by Gitee
commit c1a29bb61e
3 changed files with 16 additions and 6 deletions


@@ -267,6 +267,7 @@ class SpeechCommandsDataset(MappableDataset):
Citation:
.. code-block::
@article{2018Speech,
title={Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition},
author={Warden, P.},
@@ -306,7 +307,7 @@ class TedliumDataset(MappableDataset):
dataset_dir (str): Path to the root directory that contains the dataset.
release (str): Release of the dataset, can be "release1", "release2", "release3".
usage (str, optional): Usage of this dataset.
- For release1 or release2, can be `train`, `test`, ` dev` or `all`.
+ For release1 or release2, can be `train`, `test`, `dev` or `all`.
`train` will read from train samples,
`test` will read from test samples,
`dev` will read from dev samples,
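As a quick illustration of the `release` and `usage` values documented in this hunk, a minimal sketch follows; the path is hypothetical and the exact signature should be checked against your MindSpore version.

.. code-block:: python

    import mindspore.dataset as ds

    # Hypothetical local copy of the TED-LIUM release2 data.
    tedlium_dir = "/path/to/tedlium_dataset_dir"

    # Read only the dev split of release2; `usage` accepts
    # `train`, `test`, `dev` or `all` as documented above.
    dataset = ds.TedliumDataset(dataset_dir=tedlium_dir,
                                release="release2",
                                usage="dev")

    for item in dataset.create_dict_iterator(output_numpy=True):
        print(item.keys())
        break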


@@ -133,7 +133,7 @@ class AmazonReviewDataset(SourceDataset):
For Polarity dataset, `train` will read from 3,600,000 train samples,
`test` will read from 400,000 test samples,
`all` will read from all 4,000,000 samples.
- For Full dataset, `train` will read from 3,000,000 train samples,
+ For Full dataset, `train` will read from 3,000,000 train samples,
`test` will read from 650,000 test samples,
`all` will read from all 3,650,000 samples (default=None, all samples).
num_samples (int, optional): Number of samples (rows) to be read (default=None, reads the full dataset).
@@ -146,6 +146,7 @@ class AmazonReviewDataset(SourceDataset):
- Shuffle.GLOBAL: Shuffle both the files and samples.
- Shuffle.FILES: Shuffle files only.
num_shards (int, optional): Number of shards that the dataset will be divided into (default=None).
When this argument is specified, `num_samples` reflects the max sample number of per shard.
shard_id (int, optional): The shard ID within num_shards (default=None). This
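A minimal sketch of the sharded-read pattern these arguments describe, assuming each worker builds its own pipeline; the path and shard count are hypothetical.

.. code-block:: python

    import mindspore.dataset as ds

    amazon_dir = "/path/to/amazon_review_dataset_dir"   # hypothetical path

    # Split the train split across 2 shards; with `num_shards` set,
    # `num_samples` is interpreted per shard, as noted above.
    shard0 = ds.AmazonReviewDataset(dataset_dir=amazon_dir,
                                    usage="train",
                                    num_shards=2,
                                    shard_id=0,
                                    shuffle=ds.Shuffle.GLOBAL)

    print("rows in shard 0:", shard0.get_dataset_size())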
@@ -1060,6 +1061,7 @@ class PennTreebankDataset(SourceDataset, TextBaseDataset):
You can unzip the dataset files into this directory structure and read by MindSpore's API.
.. code-block::
.
PennTreebank_dataset_dir
ptb.test.txt
@@ -1101,7 +1103,7 @@ class PennTreebankDataset(SourceDataset, TextBaseDataset):
class SogouNewsDataset(SourceDataset):
"""
r"""
A source dataset that reads and parses Sogou News dataset.
The generated dataset has three columns: :py:obj:`[index, title, content]`.
@@ -1307,7 +1309,7 @@ class WikiTextDataset(SourceDataset):
Args:
dataset_dir (str): Path to the root directory that contains the dataset.
- usage (str, optional): Acceptable usages include `train`, `test`, 'valid' and `all`(default=None, all samples).
+ usage (str, optional): Acceptable usages include `train`, `test`, 'valid' and `all` (default=None, all samples).
num_samples (int, optional): Number of samples (rows) to read (default=None, reads the full dataset).
num_parallel_workers (int, optional): Number of workers to read the data
(default=None, number set in the config).
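For context, a minimal sketch of the arguments described in this hunk, assuming a hypothetical local path and the standard `mindspore.dataset` import.

.. code-block:: python

    import mindspore.dataset as ds

    wiki_dir = "/path/to/wiki_text_dataset_dir"   # hypothetical path

    # `usage` accepts `train`, `test`, `valid` or `all`; restrict the read
    # to 100 rows and 2 parallel workers as a quick smoke test.
    dataset = ds.WikiTextDataset(dataset_dir=wiki_dir,
                                 usage="valid",
                                 num_samples=100,
                                 num_parallel_workers=2)

    for row in dataset.create_tuple_iterator(output_numpy=True):
        print(row[0])
        break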


@@ -2049,6 +2049,7 @@ class Flowers102Dataset(GeneratorDataset):
You can unzip the dataset files into this directory structure and read by MindSpore's API.
.. code-block::
.
flowes102_dataset_dir
imagelabels.mat
@@ -2636,7 +2637,7 @@ class PhotoTourDataset(MappableDataset):
usage (str, optional): Usage of the dataset, can be `train` or `test` (Default=None, will be set to 'train').
When usage is `train`, number of samples for each `name` is
{'notredame': 468159, 'yosemite': 633587, 'liberty': 450092, 'liberty_harris': 379587,
- 'yosemite_harris': 450912, 'notredame_harris': 325295}.
+ 'yosemite_harris': 450912, 'notredame_harris': 325295}.
When usage is `test`, will read 100,000 samples for testing.
num_samples (int, optional): The number of images to be included in the dataset
(default=None, will read all images).
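A minimal sketch combining the `name` and `usage` arguments described above; the path and sample cap are hypothetical.

.. code-block:: python

    import mindspore.dataset as ds

    photo_tour_dir = "/path/to/photo_tour_dataset_directory"   # hypothetical path

    # Read from the 450,092 'liberty' training patches; cap at 1,000 samples
    # so the sketch stays cheap to run.
    dataset = ds.PhotoTourDataset(dataset_dir=photo_tour_dir,
                                  name="liberty",
                                  usage="train",
                                  num_samples=1000)

    print("rows:", dataset.get_dataset_size())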
@@ -2721,6 +2722,7 @@ class PhotoTourDataset(MappableDataset):
You can unzip the original PhotoTour dataset files into this directory structure and read by MindSpore's API.
.. code-block::
.
photo_tour_dataset_directory
liberty/
@@ -2864,8 +2866,9 @@ class Places365Dataset(MappableDataset):
You can unzip the original Places365 dataset files into this directory structure and read by MindSpore's API.
.. code-block::
.
categories_places365.txt
categories_places365
places365_train-standard.txt
places365_train-challenge.txt
val_large/
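A minimal sketch of reading the directory structure listed above, assuming a hypothetical local path; arguments other than `dataset_dir` are left at their defaults, so verify the exact signature against your MindSpore version.

.. code-block:: python

    import mindspore.dataset as ds

    # Hypothetical path to the unzipped directory structure sketched above.
    places365_dir = "/path/to/places365_dataset_directory"

    # Only `dataset_dir` is passed here; usage/decode and friends keep their defaults.
    dataset = ds.Places365Dataset(dataset_dir=places365_dir)

    for item in dataset.create_dict_iterator(output_numpy=True):
        print(sorted(item.keys()))
        break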
@@ -3157,6 +3160,7 @@ class SBDataset(GeneratorDataset):
(default=None, number set in the config).
shuffle (bool, optional): Whether to perform shuffle on the dataset (default=None, expected
order behavior shown in the table).
decode (bool, optional): Decode the images after reading (default=None).
sampler (Sampler, optional): Object used to choose samples from the
dataset (default=None, expected order behavior shown in the table).
num_shards (int, optional): Number of shards that the dataset will be divided
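A minimal sketch of the `decode` and `shuffle` arguments described in this hunk; the path is hypothetical and the remaining arguments (task, usage, sampler, num_shards, ...) keep their defaults.

.. code-block:: python

    import mindspore.dataset as ds

    sb_dir = "/path/to/sb_dataset_dir"   # hypothetical path

    # Decode images at read time and keep the natural file order.
    dataset = ds.SBDataset(dataset_dir=sb_dir,
                           decode=True,
                           shuffle=False)

    print("samples:", dataset.get_dataset_size())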
@@ -3599,6 +3603,7 @@ class STL10Dataset(MappableDataset):
You can unzip the dataset files into this directory structure and read by MindSpore's API.
.. code-block::
.
stl10_dataset_dir
train_X.bin
@@ -3752,6 +3757,7 @@ class SVHNDataset(GeneratorDataset):
You can unzip the dataset files into this directory structure and read by MindSpore's API.
.. code-block::
.
svhn_dataset_dir
train_32x32.mat
@@ -3847,6 +3853,7 @@ class USPSDataset(SourceDataset):
You can download and unzip the dataset files into this directory structure and read by MindSpore's API.
.. code-block::
.
usps_dataset_dir
usps
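To close, a minimal sketch of reading the unzipped structure above; the path is hypothetical, and the `usage` value is an assumption that should be checked against your MindSpore version's documentation.

.. code-block:: python

    import mindspore.dataset as ds

    # Hypothetical path to the unzipped directory structure sketched above.
    usps_dir = "/path/to/usps_dataset_dir"

    # Read the test split; `usage` is assumed to also accept `train` and `all`.
    dataset = ds.USPSDataset(dataset_dir=usps_dir, usage="test")

    for image, label in dataset.create_tuple_iterator(output_numpy=True):
        print(image.shape, int(label))
        break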