From cfbe020c0fac981f6367fd47ee063552ad3a806b Mon Sep 17 00:00:00 2001
From: lvmingfu
Date: Tue, 24 Aug 2021 09:52:12 +0800
Subject: [PATCH] fix error links for master

---
 model_zoo/official/cv/east/README.md              | 8 ++++----
 model_zoo/research/audio/deepspeech2/README-CN.md | 4 ++--
 model_zoo/research/audio/deepspeech2/README.md    | 4 ++--
 model_zoo/research/nlp/atae_lstm/README.md        | 6 +++---
 4 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/model_zoo/official/cv/east/README.md b/model_zoo/official/cv/east/README.md
index 13f7dcefbd3..3064033c606 100644
--- a/model_zoo/official/cv/east/README.md
+++ b/model_zoo/official/cv/east/README.md
@@ -41,12 +41,12 @@ Dataset used [ICDAR 2015](https://rrc.cvc.uab.es/?ch=4&com=downloads)
 # [Environment Requirements](#contents)
 
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
-    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+    - [MindSpore Tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
 
 # [Script description](#contents)
 
@@ -88,7 +88,7 @@ sh run_eval_ascend.sh [DATASET_PATH] [CKPT_PATH] [DEVICE_ID]
 ```
 
 > Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
 >
 > This is processor cores binding operation regarding the `device_num` and total processor numbers. If you are not expect to do it, remove the operations `taskset` in `scripts/run_distribute_train.sh`
 >
diff --git a/model_zoo/research/audio/deepspeech2/README-CN.md b/model_zoo/research/audio/deepspeech2/README-CN.md
index b0c5c3212aa..af1443c2cd7 100644
--- a/model_zoo/research/audio/deepspeech2/README-CN.md
+++ b/model_zoo/research/audio/deepspeech2/README-CN.md
@@ -61,8 +61,8 @@ DeepSpeech2是一个使用 CTC 损失训练的语音识别模型。它用神经
 - 框架
     - [MindSpore](https://www.mindspore.cn/install/en)
 - 通过下面网址可以获得更多信息:
-    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+    - [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
 
 # [文件说明和运行说明](#contents)
 
diff --git a/model_zoo/research/audio/deepspeech2/README.md b/model_zoo/research/audio/deepspeech2/README.md
index b05024a9264..e32bd8dbf2f 100644
--- a/model_zoo/research/audio/deepspeech2/README.md
+++ b/model_zoo/research/audio/deepspeech2/README.md
@@ -59,8 +59,8 @@ Dataset used: [LibriSpeech]()
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
-    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+    - [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
 
 # [Script Description](#contents)
 
diff --git a/model_zoo/research/nlp/atae_lstm/README.md b/model_zoo/research/nlp/atae_lstm/README.md
index d89af44b888..27901d433e5 100644
--- a/model_zoo/research/nlp/atae_lstm/README.md
+++ b/model_zoo/research/nlp/atae_lstm/README.md
@@ -54,7 +54,7 @@ AttentionLSTM模型的输入由aspect和word向量组成,输入部分输入单
 ## 混合精度
 
-采用[混合精度](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
 
 以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
 
 # 环境要求
@@ -64,8 +64,8 @@ AttentionLSTM模型的输入由aspect和word向量组成,输入部分输入单
 - 框架
     - [MindSpore](https://www.mindspore.cn/install/en)
 - 如需查看详情,请参见如下资源:
-    - [MindSpore教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+    - [MindSpore教程](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
 
 # 快速入门
 
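For readers following the note patched in the east README above, here is a minimal sketch, under stated assumptions, of how the `HCCL_CONNECT_TIMEOUT` advice is typically applied before a distributed launch on Ascend. The script invocation and its bracketed arguments are placeholders in the README's own convention, not taken from this patch; check `scripts/run_distribute_train.sh` for the actual usage.

```bash
# Sketch only: extend the HCCL connection check from the default 120 s to 600 s,
# as the note recommends for large models, before launching distributed training.
export HCCL_CONNECT_TIMEOUT=600

# Placeholder invocation; consult scripts/run_distribute_train.sh for the exact arguments.
bash scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET_PATH]
```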