From ac627e046bed97092006a3b5c517507ad3938f9d Mon Sep 17 00:00:00 2001 From: Ting Wang Date: Sat, 19 Sep 2020 17:36:35 +0800 Subject: [PATCH] update links for README Signed-off-by: Ting Wang --- README.md | 4 +- README_CN.md | 4 +- mindspore/lite/README.md | 6 +-- mindspore/lite/README_CN.md | 6 +-- mindspore/nn/probability/README.md | 6 +-- model_zoo/official/cv/alexnet/README.md | 4 +- model_zoo/official/cv/deeplabv3/README.md | 4 +- model_zoo/official/cv/densenet121/README.md | 6 +-- model_zoo/official/cv/googlenet/README.md | 8 ++-- model_zoo/official/cv/inceptionv3/README.md | 8 ++-- model_zoo/official/cv/lenet/README.md | 4 +- model_zoo/official/cv/lenet_quant/Readme.md | 4 +- model_zoo/official/cv/maskrcnn/README.md | 6 +-- model_zoo/official/cv/mobilenetv2/README.md | 6 +-- .../official/cv/mobilenetv2_quant/Readme.md | 6 +-- model_zoo/official/cv/mobilenetv3/Readme.md | 4 +- model_zoo/official/cv/psenet/README.md | 8 ++-- model_zoo/official/cv/resnet/README.md | 6 +-- .../official/cv/resnet50_quant/Readme.md | 6 +-- model_zoo/official/cv/resnet_thor/README.md | 4 +- model_zoo/official/cv/resnext50/README.md | 6 +-- model_zoo/official/cv/shufflenetv2/Readme.md | 4 +- model_zoo/official/cv/ssd/README.md | 2 +- model_zoo/official/cv/unet/README.md | 6 +-- model_zoo/official/cv/vgg16/README.md | 8 ++-- model_zoo/official/cv/warpctc/README.md | 6 +-- .../official/cv/yolov3_darknet53/README.md | 4 +- .../cv/yolov3_darknet53_quant/README.md | 4 +- .../official/cv/yolov3_resnet18/README.md | 8 ++-- model_zoo/official/gnn/bgcf/README.md | 4 +- model_zoo/official/gnn/gat/README.md | 4 +- model_zoo/official/gnn/gcn/README.md | 4 +- .../lite/image_classification/README.en.md | 4 +- .../lite/image_classification/README.md | 2 +- .../official/lite/object_detection/README.md | 4 +- model_zoo/official/nlp/bert/README.md | 6 +-- model_zoo/official/nlp/bert_thor/README.md | 4 +- model_zoo/official/nlp/lstm/README.md | 4 +- model_zoo/official/nlp/mass/README.md | 46 ++++++++++--------- model_zoo/official/nlp/tinybert/README.md | 6 +-- model_zoo/official/nlp/transformer/README.md | 6 +-- model_zoo/official/recommend/deepfm/README.md | 4 +- .../recommend/wide_and_deep/README.md | 4 +- .../wide_and_deep_multitable/README.md | 4 +- model_zoo/research/cv/ghostnet/Readme.md | 4 +- .../cv/resnet50_adv_pruning/Readme.md | 4 +- model_zoo/research/cv/ssd_ghostnet/README.md | 6 +-- 47 files changed, 140 insertions(+), 138 deletions(-) diff --git a/README.md b/README.md index 0e60dc00eec..c325101fa88 100644 --- a/README.md +++ b/README.md @@ -31,7 +31,7 @@ enrichment of the AI software/hardware application ecosystem. MindSpore Architecture -For more details please check out our [Architecture Guide](https://www.mindspore.cn/docs/en/master/architecture.html). +For more details please check out our [Architecture Guide](https://www.mindspore.cn/doc/note/en/master/design/mindspore/architecture.html). ### Automatic Differentiation @@ -206,7 +206,7 @@ please check out [docker](docker/README.md) repo for the details. ## Quickstart -See the [Quick Start](https://www.mindspore.cn/tutorial/en/master/quick_start/quick_start.html) +See the [Quick Start](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html) to implement the image classification. 
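As a taste of what that Quick Start covers, the following is a minimal sketch of training an image classifier with MindSpore. It assumes roughly the 1.0-era Python API; the dataset path and the deliberately tiny stand-in network are illustrative assumptions, not part of the tutorial itself.

```python
import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as C
import mindspore.dataset.vision.c_transforms as CV
import mindspore.nn as nn
from mindspore import context
from mindspore.common import dtype as mstype
from mindspore.train.model import Model

context.set_context(mode=context.GRAPH_MODE, device_target="CPU")

# "./MNIST/train" is a placeholder path to an unpacked MNIST training set.
data = ds.MnistDataset("./MNIST/train")
data = data.map(operations=CV.Rescale(1.0 / 255.0, 0.0), input_columns="image")
data = data.map(operations=CV.HWC2CHW(), input_columns="image")
data = data.map(operations=C.TypeCast(mstype.int32), input_columns="label")
data = data.batch(32, drop_remainder=True)

# A deliberately tiny stand-in network; the tutorial itself builds LeNet-5.
net = nn.SequentialCell([nn.Flatten(), nn.Dense(28 * 28, 10)])
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

model = Model(net, loss_fn=loss, optimizer=opt, metrics={"acc"})
model.train(1, data)  # one epoch is enough to verify the pipeline runs
```

Calling `model.eval` on a held-out split would then report the accuracy metric declared above.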
## Docs diff --git a/README_CN.md b/README_CN.md index 690d9243763..49382dacf00 100644 --- a/README_CN.md +++ b/README_CN.md @@ -28,7 +28,7 @@ MindSpore提供了友好的设计和高效的执行,旨在提升数据科学 MindSpore Architecture -欲了解更多详情,请查看我们的[总体架构](https://www.mindspore.cn/docs/zh-CN/master/architecture.html)。 +欲了解更多详情,请查看我们的[总体架构](https://www.mindspore.cn/doc/note/zh-CN/master/design/mindspore/architecture.html)。 ### 自动微分 @@ -201,7 +201,7 @@ MindSpore的Docker镜像托管在[Docker Hub](https://hub.docker.com/r/mindspore ## 快速入门 -参考[快速入门](https://www.mindspore.cn/tutorial/zh-CN/master/quick_start/quick_start.html)实现图片分类。 +参考[快速入门](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)实现图片分类。 ## 文档 diff --git a/mindspore/lite/README.md b/mindspore/lite/README.md index cacebb6a9f1..ed08052856f 100644 --- a/mindspore/lite/README.md +++ b/mindspore/lite/README.md @@ -6,7 +6,7 @@ MindSpore lite is a high-performance, lightweight open source reasoning framewor MindSpore Lite Architecture -For more details please check out our [MindSpore Lite Architecture Guide](https://www.mindspore.cn/lite/docs/en/master/architecture.html). +For more details please check out our [MindSpore Lite Architecture Guide](https://www.mindspore.cn/doc/note/en/master/design/mindspore/architecture_lite.html). ### MindSpore Lite features @@ -41,7 +41,7 @@ For more details please check out our [MindSpore Lite Architecture Guide](https: 2. Model converter and optimization - If you use MindSpore or a third-party model, you need to use the [MindSpore Lite Model Converter Tool](https://www.mindspore.cn/lite/tutorial/en/master/use/converter_tool.html) to convert the model into a MindSpore Lite model. The MindSpore Lite model converter tool provides converters from TensorFlow Lite, Caffe and ONNX to the MindSpore Lite model format; fusion and quantization can be introduced during the conversion procedure. + If you use MindSpore or a third-party model, you need to use the [MindSpore Lite Model Converter Tool](https://www.mindspore.cn/tutorial/lite/en/master/use/convert_model.html) to convert the model into a MindSpore Lite model. The MindSpore Lite model converter tool provides converters from TensorFlow Lite, Caffe and ONNX to the MindSpore Lite model format; fusion and quantization can be introduced during the conversion procedure. MindSpore also provides a tool to convert models for running on IoT devices. @@ -51,7 +51,7 @@ For more details please check out our [MindSpore Lite Architecture Guide](https: 4. Inference - Load the model and perform inference. [Inference](https://www.mindspore.cn/lite/tutorial/en/master/use/runtime.html) is the process of running input data through the model to get output. + Load the model and perform inference. [Inference](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html) is the process of running input data through the model to get output. MindSpore provides pre-trained models that can be deployed on mobile devices; see the [example](https://www.mindspore.cn/lite/examples/en). 
diff --git a/mindspore/lite/README_CN.md b/mindspore/lite/README_CN.md index 7e9d2db64cc..ff3abadb6ec 100644 --- a/mindspore/lite/README_CN.md +++ b/mindspore/lite/README_CN.md @@ -9,7 +9,7 @@ MindSpore Lite是MindSpore推出的端云协同的、轻量化、高性能AI推 MindSpore Lite Architecture -欲了解更多详情,请查看我们的[MindSpore Lite 总体架构](https://www.mindspore.cn/lite/docs/zh-CN/master/architecture.html)。 +欲了解更多详情,请查看我们的[MindSpore Lite 总体架构](https://www.mindspore.cn/lite/doc/note/zh-CN/master/design/mindspore/architecture_lite.html)。 ## MindSpore Lite技术特点 @@ -49,7 +49,7 @@ MindSpore Lite是MindSpore推出的端云协同的、轻量化、高性能AI推 2. 模型转换/优化 - 如果您使用MindSpore或第三方训练的模型,需要使用[MindSpore Lite模型转换工具](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html)转换成MindSpore Lite模型格式。MindSpore Lite模型转换工具不仅提供了将TensorFlow Lite、Caffe、ONNX等模型格式转换为MindSpore Lite模型格式,还提供了算子融合、量化等功能。 + 如果您使用MindSpore或第三方训练的模型,需要使用[MindSpore Lite模型转换工具](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/convert_model.html)转换成MindSpore Lite模型格式。MindSpore Lite模型转换工具不仅提供了将TensorFlow Lite、Caffe、ONNX等模型格式转换为MindSpore Lite模型格式,还提供了算子融合、量化等功能。 MindSpore还提供了将IoT设备上运行的模型转换成.C代码的生成工具。 @@ -61,7 +61,7 @@ MindSpore Lite是MindSpore推出的端云协同的、轻量化、高性能AI推 4. 模型推理 - 主要完成模型推理工作,即加载模型,完成模型相关的所有计算。[推理](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/runtime.html)是通过模型运行输入数据,获取预测的过程。 + 主要完成模型推理工作,即加载模型,完成模型相关的所有计算。[推理](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/runtime.html)是通过模型运行输入数据,获取预测的过程。 MindSpore提供了预训练模型部署在智能终端的[样例](https://www.mindspore.cn/lite/examples)。 diff --git a/mindspore/nn/probability/README.md b/mindspore/nn/probability/README.md index ccc9acaa91c..68a5e42f2b5 100644 --- a/mindspore/nn/probability/README.md +++ b/mindspore/nn/probability/README.md @@ -41,7 +41,7 @@ MDP requires MindSpore version 0.7.0-beta or later. MDP is actively evolving. In ### Tutorial **Bayesian Neural Network** -1. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/en/master/quick_start/quick_start.html) in the Tutorial. +1. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html) in the Tutorial. 2. Define a Bayesian Neural Network. The Bayesian LeNet is used in this example. @@ -220,7 +220,7 @@ net_loss = ELBO(latent_prior='Normal', output_prior='Normal') optimizer = nn.Adam(params=vae.trainable_params(), learning_rate=0.001) net_with_loss = nn.WithLossCell(vae, net_loss) ``` -3. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/en/master/quick_start/quick_start.html) in the Tutorial. +3. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html) in the Tutorial. 4. Use the SVI interface to train the VAE network. `vi.run` can return the trained network, and `get_train_loss` can get the loss after training. 
``` @@ -423,7 +423,7 @@ if __name__ == "__main__": **Uncertainty Evaluation** The uncertainty estimation toolbox is based on MindSpore Deep Probabilistic Programming (MDP), and it is suitable for mainstream deep learning models, such as regression, classification, target detection and so on. In the inference stage, with the uncertainty estimation toolbox, developers only need to pass in the trained model and training dataset, specify the task and the samples to be estimated, and can then obtain the aleatoric uncertainty and epistemic uncertainty. Based on the uncertainty information, developers can understand the model and the dataset better. -In a classification task, for example, the model is a LeNet model. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/en/master/quick_start/quick_start.html) in the Tutorial. For evaluating the uncertainty of test examples, the use of the toolbox is as follows: +In a classification task, for example, the model is a LeNet model. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html) in the Tutorial. For evaluating the uncertainty of test examples, the use of the toolbox is as follows: ``` from mindspore.nn.probability.toolbox.uncertainty_evaluation import UncertaintyEvaluation from mindspore.train.serialization import load_checkpoint, load_param_into_net diff --git a/model_zoo/official/cv/alexnet/README.md b/model_zoo/official/cv/alexnet/README.md index 0686dba4a2e..e9c8007c0fd 100644 --- a/model_zoo/official/cv/alexnet/README.md +++ b/model_zoo/official/cv/alexnet/README.md @@ -52,8 +52,8 @@ Dataset used: [CIFAR-10]() - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) diff --git a/model_zoo/official/cv/deeplabv3/README.md b/model_zoo/official/cv/deeplabv3/README.md index 1cf1889c253..f28c1994ec8 100644 --- a/model_zoo/official/cv/deeplabv3/README.md +++ b/model_zoo/official/cv/deeplabv3/README.md @@ -67,8 +67,8 @@ Before running code of this project,please ensure you have the following envir For more information about how to get started with MindSpore, see the following sections: - - [MindSpore's Tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore's Api](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) ## Quick Start Guide diff --git a/model_zoo/official/cv/densenet121/README.md b/model_zoo/official/cv/densenet121/README.md index ebaf8afd121..6db7b4c5e17 100644 --- a/model_zoo/official/cv/densenet121/README.md +++ b/model_zoo/official/cv/densenet121/README.md @@ -57,7 +57,7 @@ The default configuration of the Dataset are as follows: ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method
accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. @@ -69,8 +69,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) diff --git a/model_zoo/official/cv/googlenet/README.md b/model_zoo/official/cv/googlenet/README.md index 014dc782113..422ebd676af 100644 --- a/model_zoo/official/cv/googlenet/README.md +++ b/model_zoo/official/cv/googlenet/README.md @@ -55,7 +55,7 @@ Dataset used: [CIFAR-10]() ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. 
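The mixed-precision behaviour described in these READMEs is normally switched on through a single training-API argument. A minimal sketch, assuming the `Model` wrapper and its `amp_level` argument from the same era of the Python API (the network, loss and optimizer here are placeholders, not the model zoo's actual code):

```python
import mindspore.nn as nn
from mindspore import context
from mindspore.train.model import Model

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

# Placeholder network/loss/optimizer; any Cell-based model works the same way.
net = nn.Dense(1024, 10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O2" keeps batch normalization in FP32 and casts most other
# operators to FP16; "O0" disables mixed precision, "O3" casts everything.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
# model.train(...) then proceeds exactly as in full-precision training.
```

Training with INFO logging enabled is then the way to spot the 'reduce precision' notices mentioned above.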
@@ -67,8 +67,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) @@ -334,7 +334,7 @@ Parameters for both training and evaluation can be set in config.py ## [How to use](#contents) ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html). Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html). Following the steps below, this is a simple example: - Running on Ascend diff --git a/model_zoo/official/cv/inceptionv3/README.md b/model_zoo/official/cv/inceptionv3/README.md index c2aee9e44c9..78d83f95823 100644 --- a/model_zoo/official/cv/inceptionv3/README.md +++ b/model_zoo/official/cv/inceptionv3/README.md @@ -45,7 +45,7 @@ Dataset used can refer to paper. ## [Mixed Precision(Ascend)](#contents) -The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. 
@@ -56,8 +56,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) @@ -132,7 +132,7 @@ sh run_distribute_train.sh RANK_TABLE_FILE DATA_PATH sh run_standalone_train.sh DEVICE_ID DATA_PATH ``` > Notes: - RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training_ascend.html), and the device_ip can be got from [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools). + RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_ascend.html), and the device_ip can be got from [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools). > This binds processor cores according to `device_num` and the total number of processor cores. If you do not expect this, remove the `taskset` operations in `scripts/run_distribute_train.sh` diff --git a/model_zoo/official/cv/lenet/README.md b/model_zoo/official/cv/lenet/README.md index 16e1a8a06c3..c8c071983aa 100644 --- a/model_zoo/official/cv/lenet/README.md +++ b/model_zoo/official/cv/lenet/README.md @@ -58,8 +58,8 @@ Dataset used: [MNIST]() - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) diff --git a/model_zoo/official/cv/lenet_quant/Readme.md b/model_zoo/official/cv/lenet_quant/Readme.md index b4e4f43b30c..45600dbf2da 100644 --- a/model_zoo/official/cv/lenet_quant/Readme.md +++ b/model_zoo/official/cv/lenet_quant/Readme.md @@ -60,8 +60,8 @@ Dataset used: [MNIST]() - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) diff --git a/model_zoo/official/cv/maskrcnn/README.md b/model_zoo/official/cv/maskrcnn/README.md index 20eb077c50a..4aa20c02b12 100644 --- a/model_zoo/official/cv/maskrcnn/README.md +++ b/model_zoo/official/cv/maskrcnn/README.md @@ -52,8 +52,8 @@ MaskRCNN is a two-stage target detection network. 
It extends FasterRCNN by addin - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/en/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/en/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) - third-party libraries @@ -307,7 +307,7 @@ Usage: sh run_standalone_train.sh [PRETRAINED_MODEL] ## [Training Process](#contents) -- Set options in `config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html#mindspore) for more information about dataset. +- Set options in `config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/data_preparation.html) for more information about dataset. ### [Training](#content) - Run `run_standalone_train.sh` for non-distributed training of MaskRCNN model. diff --git a/model_zoo/official/cv/mobilenetv2/README.md b/model_zoo/official/cv/mobilenetv2/README.md index c6efa83754c..12f1d12d0e1 100644 --- a/model_zoo/official/cv/mobilenetv2/README.md +++ b/model_zoo/official/cv/mobilenetv2/README.md @@ -43,7 +43,7 @@ Dataset used: [imagenet](http://www.image-net.org/) ## [Mixed Precision(Ascend)](#contents) -The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. 
# [Environment Requirements](#contents) @@ -53,8 +53,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) diff --git a/model_zoo/official/cv/mobilenetv2_quant/Readme.md b/model_zoo/official/cv/mobilenetv2_quant/Readme.md index 55c62900185..6954413dc93 100644 --- a/model_zoo/official/cv/mobilenetv2_quant/Readme.md +++ b/model_zoo/official/cv/mobilenetv2_quant/Readme.md @@ -47,7 +47,7 @@ Dataset used: [imagenet](http://www.image-net.org/) ## [Mixed Precision](#contents) -The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. 
# [Environment Requirements](#contents) @@ -57,8 +57,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) diff --git a/model_zoo/official/cv/mobilenetv3/Readme.md b/model_zoo/official/cv/mobilenetv3/Readme.md index fa3099bd0cf..e4fcc867735 100644 --- a/model_zoo/official/cv/mobilenetv3/Readme.md +++ b/model_zoo/official/cv/mobilenetv3/Readme.md @@ -47,8 +47,8 @@ Dataset used: [imagenet](http://www.image-net.org/) - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) diff --git a/model_zoo/official/cv/psenet/README.md b/model_zoo/official/cv/psenet/README.md index 8455e04e5d2..48713e8a507 100644 --- a/model_zoo/official/cv/psenet/README.md +++ b/model_zoo/official/cv/psenet/README.md @@ -43,10 +43,10 @@ A testing set containing about 2000 readable words - Hardware(Ascend) - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. - Framework - - [MindSpore](http://10.90.67.50/mindspore/archive/20200506/OpenSource/me_vm_x86/) + - [MindSpore](http://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) - install MindSpore - install [pybind11](https://github.com/pybind/pybind11) - install [OpenCV 3.4](https://docs.opencv.org/3.4.9/d7/d9f/tutorial_linux_install.html) @@ -193,7 +193,7 @@ Calculated!{"precision": 0.814796668299853, "recall": 0.8006740491092923, "hmean ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html). +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html). 
Following the steps below, this is a simple example: ``` # Load unseen dataset for inference diff --git a/model_zoo/official/cv/resnet/README.md b/model_zoo/official/cv/resnet/README.md index 028168a8ac2..a7dccf54665 100644 --- a/model_zoo/official/cv/resnet/README.md +++ b/model_zoo/official/cv/resnet/README.md @@ -72,7 +72,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/) ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. # [Environment Requirements](#contents) @@ -82,8 +82,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) diff --git a/model_zoo/official/cv/resnet50_quant/Readme.md b/model_zoo/official/cv/resnet50_quant/Readme.md index 2a537dbef1d..9cbed1c3a93 100644 --- a/model_zoo/official/cv/resnet50_quant/Readme.md +++ b/model_zoo/official/cv/resnet50_quant/Readme.md @@ -46,7 +46,7 @@ Dataset used: [imagenet](http://www.image-net.org/) ## [Mixed Precision](#contents) -The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorial/training/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. 
Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. # [Environment Requirements](#contents) @@ -56,8 +56,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) diff --git a/model_zoo/official/cv/resnet_thor/README.md b/model_zoo/official/cv/resnet_thor/README.md index ef793bf1304..7598f427b06 100644 --- a/model_zoo/official/cv/resnet_thor/README.md +++ b/model_zoo/official/cv/resnet_thor/README.md @@ -50,8 +50,8 @@ The classical first-order optimization algorithm, such as SGD, has a small amoun - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) ## Quick Start After installing MindSpore via the official website, you can start training and evaluation as follows: diff --git a/model_zoo/official/cv/resnext50/README.md b/model_zoo/official/cv/resnext50/README.md index 89e8e621c05..00fbe4503b9 100644 --- a/model_zoo/official/cv/resnext50/README.md +++ b/model_zoo/official/cv/resnext50/README.md @@ -47,7 +47,7 @@ Dataset used: [imagenet](http://www.image-net.org/) ## [Mixed Precision](#contents) -The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorial/training/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. 
Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. @@ -58,8 +58,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) diff --git a/model_zoo/official/cv/shufflenetv2/Readme.md b/model_zoo/official/cv/shufflenetv2/Readme.md index 20b43206cf0..cd6aa514832 100644 --- a/model_zoo/official/cv/shufflenetv2/Readme.md +++ b/model_zoo/official/cv/shufflenetv2/Readme.md @@ -42,8 +42,8 @@ Dataset used: [imagenet](http://www.image-net.org/) - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) diff --git a/model_zoo/official/cv/ssd/README.md b/model_zoo/official/cv/ssd/README.md index cffb3ea92f8..df3ffb064a3 100644 --- a/model_zoo/official/cv/ssd/README.md +++ b/model_zoo/official/cv/ssd/README.md @@ -147,7 +147,7 @@ sh run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID] ### Training on Ascend -To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/converting_datasets.html) files by `coco_root` (COCO dataset) or `image_dir` and `anno_path` (own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** +To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/converse_datasets.html) files by `coco_root` (COCO dataset) or `image_dir` and `anno_path` (own dataset). 
**Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** - Distribute mode diff --git a/model_zoo/official/cv/unet/README.md b/model_zoo/official/cv/unet/README.md index 03d5a569ae7..29304314720 100644 --- a/model_zoo/official/cv/unet/README.md +++ b/model_zoo/official/cv/unet/README.md @@ -57,8 +57,8 @@ Dataset used: [ISBI Challenge](http://brainiac2.mit.edu/isbi_challenge/home) - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) @@ -235,7 +235,7 @@ step: 300, loss is 0.18949677, fps is 57.63118508760329 ## [How to use](#contents) ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html). Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/training/master/advanced_use/migrate_3rd_scripts.html). Following the steps below, this is a simple example: - Running on Ascend diff --git a/model_zoo/official/cv/vgg16/README.md b/model_zoo/official/cv/vgg16/README.md index d01084b6433..f2a0dffc0a4 100644 --- a/model_zoo/official/cv/vgg16/README.md +++ b/model_zoo/official/cv/vgg16/README.md @@ -78,7 +78,7 @@ here basic modules mainly include basic operation like: **3×3 conv** and **2× ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. 
@@ -90,8 +90,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) @@ -290,7 +290,7 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579 ... ... ``` -> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html). +> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). > **Attention** This will bind the processor cores according to the `device_num` and total processor numbers. If you don't expect to run pretraining with binding processor cores, remove the operations about `taskset` in `scripts/run_distribute_train.sh` diff --git a/model_zoo/official/cv/warpctc/README.md b/model_zoo/official/cv/warpctc/README.md index cf046bfd775..fe25661d928 100644 --- a/model_zoo/official/cv/warpctc/README.md +++ b/model_zoo/official/cv/warpctc/README.md @@ -42,8 +42,8 @@ The dataset is self-generated using a third-party library called [captcha](https - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/en/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/en/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) @@ -161,7 +161,7 @@ Parameters for both training and evaluation can be set in config.py. ## [Training Process](#contents) -- Set options in `config.py`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html#mindspore) for more information about dataset. +- Set options in `config.py`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/data_preparation.html) for more information about dataset. ### [Training](#contents) - Run `run_standalone_train.sh` for non-distributed training of WarpCTC model, either on Ascend or on GPU. 
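For the distributed variants of these scripts (for example `run_distribute_train.sh` with the `rank_table.json` mentioned above), the Python side boils down to initializing the communication group before the model is built. A hedged sketch, assuming the 1.0-era `context` and `communication` modules; the environment variable names are taken from the launch scripts, not new flags:

```python
import os
from mindspore import context
from mindspore.context import ParallelMode
from mindspore.communication.management import init, get_group_size, get_rank

# DEVICE_ID and RANK_TABLE_FILE are exported by launch scripts such as
# run_distribute_train.sh; they are assumptions of this sketch, not new flags.
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend",
                    device_id=int(os.getenv("DEVICE_ID", "0")))
init()  # joins the HCCL group described by RANK_TABLE_FILE
context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL,
                                  gradients_mean=True,
                                  device_num=get_group_size())
print("initialized rank", get_rank(), "of", get_group_size())
```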
diff --git a/model_zoo/official/cv/yolov3_darknet53/README.md b/model_zoo/official/cv/yolov3_darknet53/README.md index 7c293116256..c3e4bc11068 100644 --- a/model_zoo/official/cv/yolov3_darknet53/README.md +++ b/model_zoo/official/cv/yolov3_darknet53/README.md @@ -58,8 +58,8 @@ Dataset used: [COCO2014](https://cocodataset.org/#download) - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) diff --git a/model_zoo/official/cv/yolov3_darknet53_quant/README.md b/model_zoo/official/cv/yolov3_darknet53_quant/README.md index c4c714e3db1..7df42e8e988 100644 --- a/model_zoo/official/cv/yolov3_darknet53_quant/README.md +++ b/model_zoo/official/cv/yolov3_darknet53_quant/README.md @@ -60,8 +60,8 @@ Dataset used: [COCO2014](https://cocodataset.org/#download) - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) diff --git a/model_zoo/official/cv/yolov3_resnet18/README.md b/model_zoo/official/cv/yolov3_resnet18/README.md index be000279598..c1c234b8718 100644 --- a/model_zoo/official/cv/yolov3_resnet18/README.md +++ b/model_zoo/official/cv/yolov3_resnet18/README.md @@ -70,8 +70,8 @@ Dataset used: [COCO2017]() - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) @@ -135,7 +135,7 @@ After installing MindSpore via the official website, you can start training and ## [Training Process](#contents) ### Training on Ascend -To train the model, run `train.py` with the dataset `image_dir`, `anno_path` and `mindrecord_dir`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/converting_datasets.html) files by `image_dir` and `anno_path` (the absolute image path is formed by joining `image_dir` and the relative path in `anno_path`). **Note if `mindrecord_dir` isn't empty, it will use `mindrecord_dir` rather than `image_dir` and `anno_path`.** +To train the model, run `train.py` with the dataset `image_dir`, `anno_path` and `mindrecord_dir`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/converse_datasets.html) files by `image_dir` and `anno_path` (the absolute image path is formed by joining `image_dir` and the relative path in `anno_path`). 
**Note if `mindrecord_dir` isn't empty, it will use `mindrecord_dir` rather than `image_dir` and `anno_path`.** - Stand alone mode @@ -176,7 +176,7 @@ Note the results are two-classification (person and face) using our own annotations ## [Evaluation Process](#contents) ### Evaluation on Ascend -To eval, run `eval.py` with the dataset `image_dir`, `anno_path` (eval txt), `mindrecord_dir` and `ckpt_path`. `ckpt_path` is the path of the [checkpoint](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html) file. +To eval, run `eval.py` with the dataset `image_dir`, `anno_path` (eval txt), `mindrecord_dir` and `ckpt_path`. `ckpt_path` is the path of the [checkpoint](https://www.mindspore.cn/tutorial/training/en/master/use/save_and_load_model.html) file. ``` sh run_eval.sh 0 yolo.ckpt ./Mindrecord_eval ./dataset ./dataset/eval.txt diff --git a/model_zoo/official/gnn/bgcf/README.md b/model_zoo/official/gnn/bgcf/README.md index 2f5f065b7d3..5e561578bde 100644 --- a/model_zoo/official/gnn/bgcf/README.md +++ b/model_zoo/official/gnn/bgcf/README.md @@ -78,8 +78,8 @@ To utilize the strong computation power of Ascend chip, and accelerate the trai - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) diff --git a/model_zoo/official/gnn/gat/README.md b/model_zoo/official/gnn/gat/README.md index a2c297c84de..3a560364cee 100644 --- a/model_zoo/official/gnn/gat/README.md +++ b/model_zoo/official/gnn/gat/README.md @@ -87,8 +87,8 @@ To utilize the strong computation power of Ascend chip, and accelerate the trai - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) diff --git a/model_zoo/official/gnn/gcn/README.md b/model_zoo/official/gnn/gcn/README.md index be1db64e842..9426dd51638 100644 --- a/model_zoo/official/gnn/gcn/README.md +++ b/model_zoo/official/gnn/gcn/README.md @@ -43,8 +43,8 @@ GCN contains two graph convolution layers. 
Each layer takes nodes features and a - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/en/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/en/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) diff --git a/model_zoo/official/lite/image_classification/README.en.md b/model_zoo/official/lite/image_classification/README.en.md index 2a53dc98b0e..a4e63485857 100644 --- a/model_zoo/official/lite/image_classification/README.en.md +++ b/model_zoo/official/lite/image_classification/README.en.md @@ -43,7 +43,7 @@ The following describes how to use the MindSpore Lite C++ APIs (Android JNIs) an ## Detailed Description of the Sample Program -This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/lite/tutorial/en/master/use/runtime.html). +This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html). ### Sample Program Structure @@ -78,7 +78,7 @@ app ### Configuring MindSpore Lite Dependencies -When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/tutorial/en/master/build.html) to generate the MindSpore Lite version.  +When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html) to generate the MindSpore Lite version.  
``` android{ diff --git a/model_zoo/official/lite/image_classification/README.md b/model_zoo/official/lite/image_classification/README.md index 6c92ebdad66..83f4bf0a283 100644 --- a/model_zoo/official/lite/image_classification/README.md +++ b/model_zoo/official/lite/image_classification/README.md @@ -86,7 +86,7 @@ app ### 配置MindSpore Lite依赖项 -Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html)生成"mindspore-lite-X.X.X-mindata-armXX-cpu"库文件包(包含`libmindspore-lite.so`库文件和相关头文件,可包含多个兼容架构)。 +Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/build.html)生成"mindspore-lite-X.X.X-mindata-armXX-cpu"库文件包(包含`libmindspore-lite.so`库文件和相关头文件,可包含多个兼容架构)。 本示例中,build过程由download.gradle文件自动从华为服务器下载MindSpore Lite 版本文件,并放置在`app / src / main/cpp/`目录下。 diff --git a/model_zoo/official/lite/object_detection/README.md b/model_zoo/official/lite/object_detection/README.md index aeed405dff4..33b42b01066 100644 --- a/model_zoo/official/lite/object_detection/README.md +++ b/model_zoo/official/lite/object_detection/README.md @@ -44,7 +44,7 @@ ## 示例程序详细说明 -本端侧目标检测Android示例程序分为JAVA层和JNI层,其中,JAVA层主要通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理(针对推理结果画框)等功能;JNI层在[Runtime](https://www.mindspore.cn/tutorial/zh-CN/master/use/lite_runtime.html)中完成模型推理的过程。 +本端侧目标检测Android示例程序分为JAVA层和JNI层,其中,JAVA层主要通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理(针对推理结果画框)等功能;JNI层在[Runtime](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/runtime.html)中完成模型推理的过程。 > 此处详细说明示例程序的JNI层实现,JAVA层运用Android Camera 2 API实现开启设备摄像头以及图像帧处理等功能,需读者具备一定的Android开发基础知识。 @@ -85,7 +85,7 @@ app ### 配置MindSpore Lite依赖项 -Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html)生成"mindspore-lite-X.X.X-mindata-armXX-cpu"库文件包(包含`libmindspore-lite.so`库文件和相关头文件,可包含多个兼容架构)。 +Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/build.html)生成"mindspore-lite-X.X.X-mindata-armXX-cpu"库文件包(包含`libmindspore-lite.so`库文件和相关头文件,可包含多个兼容架构)。 在Android Studio中将编译完成的mindspore-lite-X.X.X-mindata-armXX-cpu压缩包,解压之后放置在APP工程的`app/src/main/cpp`目录下,并在app的`build.gradle`文件中配置CMake编译支持,以及`arm64-v8a`和`armeabi-v7a`的编译支持,如下所示: ``` diff --git a/model_zoo/official/nlp/bert/README.md b/model_zoo/official/nlp/bert/README.md index f8f7568fb6a..9b0999c1331 100644 --- a/model_zoo/official/nlp/bert/README.md +++ b/model_zoo/official/nlp/bert/README.md @@ -50,8 +50,8 @@ The backbone structure of BERT is transformer. For BERT_base, the transformer co - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/en/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/en/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) After installing MindSpore via the official website, you can start pre-training, fine-tuning and evaluation as follows: @@ -86,7 +86,7 @@ For distributed training, an hccl configuration file with JSON format needs to b Please follow the instructions in the link below: https:gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools. 
-For dataset, if you want to set the format and parameters, a schema configuration file with JSON format needs to be created, please refer to [tfrecord](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html#tfrecord) format.
+For the dataset, if you want to set the format and parameters, create a schema configuration file in JSON format; refer to the [tfrecord](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_loading.html#tfrecord) format.

```
For pretraining, schema file contains ["input_ids", "input_mask", "segment_ids", "next_sentence_labels", "masked_lm_positions", "masked_lm_ids", "masked_lm_weights"].

diff --git a/model_zoo/official/nlp/bert_thor/README.md b/model_zoo/official/nlp/bert_thor/README.md
index a76accff33b..bb96f68d412 100644
--- a/model_zoo/official/nlp/bert_thor/README.md
+++ b/model_zoo/official/nlp/bert_thor/README.md
@@ -51,8 +51,8 @@ The classical first-order optimization algorithm, such as SGD, has a small amoun
- Framework
  - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
-  - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-  - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+  - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+  - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

## Quick Start
After installing MindSpore via the official website, you can start training and evaluation as follows:
diff --git a/model_zoo/official/nlp/lstm/README.md b/model_zoo/official/nlp/lstm/README.md
index 5c4a41a2425..5b61b126151 100644
--- a/model_zoo/official/nlp/lstm/README.md
+++ b/model_zoo/official/nlp/lstm/README.md
@@ -42,8 +42,8 @@ LSTM contains embedding, encoder and decoder modules. 
Encoder module consists of - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/en/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/en/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) diff --git a/model_zoo/official/nlp/mass/README.md b/model_zoo/official/nlp/mass/README.md index 564e42488e0..2eea755eae0 100644 --- a/model_zoo/official/nlp/mass/README.md +++ b/model_zoo/official/nlp/mass/README.md @@ -2,32 +2,34 @@ -- [MASS: Masked Sequence to Sequence Pre-training for Language Generation Description](#mass-description) +- [MASS: Masked Sequence to Sequence Pre-training for Language Generation Description](#mass-masked-sequence-to-sequence-pre-training-for-language-generation-description) - [Model Architecture](#model-architecture) - [Dataset](#dataset) - [Features](#features) - [Script description](#script-description) - - [Data Preparation](#Data-Preparation) - - [Tokenization](#Tokenization) - - [Byte Pair Encoding](#Byte-Pair-Encoding) - - [Build Vocabulary](#Build-Vocabulary) - - [Generate Dataset](#Generate-Dataset) - - [News Crawl Corpus](#News-Crawl-Corpus) - - [Gigaword Corpus](#Gigaword-Corpus) - - [Cornell Movie Dialog Corpus](#Cornell-Movie-Dialog-Corpus) - - [Configuration](#Configuration) - - [Training & Evaluation process](#Training-&-Evaluation-process) - - [Weights average](#Weights-average) - - [Learning rate scheduler](#Learning-rate-scheduler) + - [Data Preparation](#data-preparation) + - [Tokenization](#tokenization) + - [Byte Pair Encoding](#byte-pair-encoding) + - [Build Vocabulary](#build-vocabulary) + - [Generate Dataset](#generate-dataset) + - [News Crawl Corpus](#news-crawl-corpus) + - [Gigaword Corpus](#gigaword-corpus) + - [Cornell Movie Dialog Corpus](#cornell-movie-dialog-corpus) + - [Configuration](#configuration) + - [Training & Evaluation process](#training--evaluation-process) + - [Weights average](#weights-average) + - [Learning rate scheduler](#learning-rate-scheduler) - [Environment Requirements](#environment-requirements) - - [Platform](#Platform) - - [Requirements](#Requirements) + - [Platform](#platform) + - [Requirements](#requirements) - [Get started](#get-started) - - [Pre-training](#Pre-training) - - [Fine-tuning](#Fine-tuning) - - [Inference](#Inference) + - [Pre-training](#pre-training) + - [Fine-tuning](#fine-tuning) + - [Inference](#inference) - [Performance](#performance) - [Results](#results) + - [Fine-Tuning on Text Summarization](#fine-tuning-on-text-summarization) + - [Fine-Tuning on Conversational ResponseGeneration](#fine-tuning-on-conversational-responsegeneration) - [Training Performance](#training-performance) - [Inference Performance](#inference-performance) - [Description of random situation](#description-of-random-situation) @@ -474,8 +476,8 @@ More detail about LR scheduler could be found in `src/utils/lr_scheduler.py`. 
- Framework
  - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
-  - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-  - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+  - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+  - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

## Requirements

@@ -486,7 +488,7 @@
subword-nmt
rouge
```

-https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html
+https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html

# Get started

@@ -542,7 +544,7 @@ sh run_gpu.sh -t t -n 1 -i 1 -c config/config.json

Get the log and output files under the path `./train_mass_*/`, and the model file under the path assigned in the `config/config.json` file.

## Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html).
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html).
For inference, configure the options in `config.json` first:
- Assign the `test_dataset` under `dataset_config` node to the dataset path.
- Assign the `existed_ckpt` under `checkpoint_path` node to the model file produced by fine-tuning.
diff --git a/model_zoo/official/nlp/tinybert/README.md b/model_zoo/official/nlp/tinybert/README.md
index fcaaa42472c..a4b08f8aed6 100644
--- a/model_zoo/official/nlp/tinybert/README.md
+++ b/model_zoo/official/nlp/tinybert/README.md
@@ -50,8 +50,8 @@ The backbone structure of TinyBERT is transformer, the transformer contains four
- Framework
  - [MindSpore](https://gitee.com/mindspore/mindspore)
- For more information, please check the resources below:
-  - [MindSpore tutorials](https://www.mindspore.cn/tutorial/en/master/index.html)
-  - [MindSpore API](https://www.mindspore.cn/api/en/master/index.html)
+  - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+  - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

# [Quick Start](#contents)
After installing MindSpore via the official website, you can start general distill, task distill and evaluation as follows:
@@ -80,7 +80,7 @@ For distributed training on Ascend, a hccl configuration file with JSON format n
Please follow the instructions in the link below:
https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools.

-For dataset, if you want to set the format and parameters, a schema configuration file with JSON format needs to be created, please refer to [tfrecord](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html#tfrecord) format.
+For the dataset, if you want to set the format and parameters, create a schema configuration file in JSON format; refer to the [tfrecord](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_loading.html#tfrecord) format.

```
For general task, schema file contains ["input_ids", "input_mask", "segment_ids"].
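A minimal sketch of such a schema file (illustrative only; the key names and
the open-ended shapes below are assumptions, see the tfrecord guide linked
above for the authoritative format):
{
    "columns": {
        "input_ids":   {"type": "int64", "shape": [-1]},
        "input_mask":  {"type": "int64", "shape": [-1]},
        "segment_ids": {"type": "int64", "shape": [-1]}
    }
}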
diff --git a/model_zoo/official/nlp/transformer/README.md b/model_zoo/official/nlp/transformer/README.md
index 59439f42320..a45f2f84ec5 100644
--- a/model_zoo/official/nlp/transformer/README.md
+++ b/model_zoo/official/nlp/transformer/README.md
@@ -44,8 +44,8 @@ Specifically, Transformer contains six encoder modules and six decoder modules.
- Framework
  - [MindSpore](https://gitee.com/mindspore/mindspore)
- For more information, please check the resources below:
-  - [MindSpore tutorials](https://www.mindspore.cn/tutorial/en/master/index.html)
-  - [MindSpore API](https://www.mindspore.cn/api/en/master/index.html)
+  - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+  - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

# [Quick Start](#contents)

@@ -191,7 +191,7 @@ Parameters for learning rate:

## [Training Process](#contents)

-- Set options in `config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html#mindspore) for more information about dataset.
+- Set options in `config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/data_preparation.html) for more information about the dataset.

- Run `run_standalone_train_ascend.sh` for non-distributed training of the Transformer model.

diff --git a/model_zoo/official/recommend/deepfm/README.md b/model_zoo/official/recommend/deepfm/README.md
index ca066e41f9b..8891fafff41 100644
--- a/model_zoo/official/recommend/deepfm/README.md
+++ b/model_zoo/official/recommend/deepfm/README.md
@@ -44,8 +44,8 @@ The FM and deep component share the same input raw feature vector, which enables
- Framework
  - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
-  - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-  - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+  - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+  - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

diff --git a/model_zoo/official/recommend/wide_and_deep/README.md b/model_zoo/official/recommend/wide_and_deep/README.md
index 1ed9c881f7b..5049ecdaae4 100644
--- a/model_zoo/official/recommend/wide_and_deep/README.md
+++ b/model_zoo/official/recommend/wide_and_deep/README.md
@@ -43,8 +43,8 @@ Currently we support host-device mode with column partition and parameter serve
- Framework
  - [MindSpore](https://gitee.com/mindspore/mindspore)
- For more information, please check the resources below:
-  - [MindSpore tutorials](https://www.mindspore.cn/tutorial/en/master/index.html)
-  - [MindSpore API](https://www.mindspore.cn/api/en/master/index.html)
+  - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+  - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

diff --git a/model_zoo/official/recommend/wide_and_deep_multitable/README.md b/model_zoo/official/recommend/wide_and_deep_multitable/README.md
index 40bd233394f..8ead47bcd9b 100644
--- a/model_zoo/official/recommend/wide_and_deep_multitable/README.md
+++ b/model_zoo/official/recommend/wide_and_deep_multitable/README.md
@@ -36,8 +36,8 @@ Wide&Deep model jointly trained wide linear models and deep neural network, whic
- Framework
  - 
[MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/en/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/en/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) diff --git a/model_zoo/research/cv/ghostnet/Readme.md b/model_zoo/research/cv/ghostnet/Readme.md index bd5004accc3..6bbfe5e17be 100644 --- a/model_zoo/research/cv/ghostnet/Readme.md +++ b/model_zoo/research/cv/ghostnet/Readme.md @@ -45,8 +45,8 @@ Dataset used: [Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/) - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) diff --git a/model_zoo/research/cv/resnet50_adv_pruning/Readme.md b/model_zoo/research/cv/resnet50_adv_pruning/Readme.md index ee4bc6b7431..59c4e8395f5 100644 --- a/model_zoo/research/cv/resnet50_adv_pruning/Readme.md +++ b/model_zoo/research/cv/resnet50_adv_pruning/Readme.md @@ -38,8 +38,8 @@ Dataset used: [Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/) - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) diff --git a/model_zoo/research/cv/ssd_ghostnet/README.md b/model_zoo/research/cv/ssd_ghostnet/README.md index a4a68db118a..2663ae469cd 100644 --- a/model_zoo/research/cv/ssd_ghostnet/README.md +++ b/model_zoo/research/cv/ssd_ghostnet/README.md @@ -25,8 +25,8 @@ Dataset used: [COCO2017]() - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html) - - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) - Install [MindSpore](https://www.mindspore.cn/install/en). @@ -134,7 +134,7 @@ python eval.py --device_id 0 --dataset coco --checkpoint_path LOG4/ssd-500_458.c ### Training on Ascend -To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/converting_datasets.html) files by `coco_root`(coco dataset) or `iamge_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** +To train the model, run `train.py`. 
If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/converse_datasets.html) files from `coco_root` (COCO dataset) or from `image_dir` and `anno_path` (your own dataset). **Note: if `mindrecord_dir` isn't empty, it will use `mindrecord_dir` instead of the raw images.**

- Distribute mode