fix link errors in mindspore

yingchen 2021-08-26 11:53:14 +08:00
parent 07eaa1969b
commit 95b4aaecd0
9 changed files with 14 additions and 14 deletions

View File

@@ -240,7 +240,7 @@ please check out [docker](https://gitee.com/mindspore/mindspore/blob/master/dock
 ## Quickstart
-See the [Quick Start](https://www.mindspore.cn/docs/programming_guide/en/master/quick_start/quick_start.html)
+See the [Quick Start](https://www.mindspore.cn/tutorials/en/master/quick_start.html)
 to implement the image classification.
 ## Docs

View File

@@ -236,7 +236,7 @@ MindSpore Docker images are hosted on [Docker Hub](https://hub.docker.com/r/mindspore
 ## Quick Start
-Refer to the [Quick Start](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/quick_start/quick_start.html) to implement image classification.
+Refer to the [Quick Start](https://www.mindspore.cn/tutorials/zh-CN/master/quick_start.html) to implement image classification.
 ## Documentation

View File

@@ -767,7 +767,7 @@ if platform.system().lower() != 'windows':
 """
 Replace a part of UTF-8 string tensor with given text according to regular expressions.
-See http://userguide.icu-project.org/strings/regexp for supported regex pattern.
+See https://unicode-org.github.io/icu/userguide/strings/regexp.html for supported regex pattern.
 Note:
 RegexReplace is not supported on Windows platform yet.
@@ -799,7 +799,7 @@ if platform.system().lower() != 'windows':
 """
 Tokenize a scalar tensor of UTF-8 string by regex expression pattern.
-See http://userguide.icu-project.org/strings/regexp for supported regex pattern.
+See https://unicode-org.github.io/icu/userguide/strings/regexp.html for supported regex pattern.
 Note:
 RegexTokenizer is not supported on Windows platform yet.
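For context, a minimal usage sketch of the two operations whose docstrings are updated above (an illustrative example, not part of the diff; it assumes the `mindspore.dataset.text` transforms `RegexReplace(pattern, replace)` and `RegexTokenizer(delim_pattern)` and a small in-memory dataset):

```python
import mindspore.dataset as ds
import mindspore.dataset.text as text

# Tiny in-memory string dataset (hypothetical sample data).
data = ["Welcome  to   MindSpore"]
dataset = ds.NumpySlicesDataset(data, column_names=["text"], shuffle=False)

# Collapse runs of whitespace to a single space, then split on spaces.
# The regex syntax follows the ICU rules linked above; note that both
# operations are unavailable on Windows.
dataset = dataset.map(operations=text.RegexReplace(pattern=r"\s+", replace=" "),
                      input_columns=["text"])
dataset = dataset.map(operations=text.RegexTokenizer(delim_pattern=" "),
                      input_columns=["text"])

for row in dataset.create_dict_iterator(output_numpy=True):
    print(row["text"])  # roughly: ['Welcome' 'to' 'MindSpore']
```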

View File

@@ -42,7 +42,7 @@ MDP requires MindSpore version 0.7.0-beta or later. MDP is actively evolving. In
 ### Bayesian Neural Network
-1. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/docs/programming_guide/en/master/quick_start/quick_start.html) in the tutorial.
+1. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorials/en/master/quick_start.html) in the tutorial.
 2. Define a Bayesian Neural Network. The Bayesian LeNet is used in this example.
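Step 2's Bayesian LeNet can be sketched with the MDP layers `ConvReparam` and `DenseReparam` from `mindspore.nn.probability.bnn_layers`; the following is a hedged illustration (not part of the diff, and the layer hyperparameters are assumptions):

```python
import mindspore.nn as nn
from mindspore.nn.probability import bnn_layers

class BNNLeNet5(nn.Cell):
    """LeNet-5 with convolution and dense layers replaced by
    reparameterized Bayesian counterparts."""
    def __init__(self, num_classes=10):
        super(BNNLeNet5, self).__init__()
        self.conv1 = bnn_layers.ConvReparam(1, 6, 5, stride=1, padding=0,
                                            has_bias=False, pad_mode="valid")
        self.conv2 = bnn_layers.ConvReparam(6, 16, 5, stride=1, padding=0,
                                            has_bias=False, pad_mode="valid")
        self.fc1 = bnn_layers.DenseReparam(16 * 5 * 5, 120)
        self.fc2 = bnn_layers.DenseReparam(120, 84)
        self.fc3 = bnn_layers.DenseReparam(84, num_classes)
        self.relu = nn.ReLU()
        self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
        self.flatten = nn.Flatten()

    def construct(self, x):
        x = self.max_pool2d(self.relu(self.conv1(x)))
        x = self.max_pool2d(self.relu(self.conv2(x)))
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        return self.fc3(x)
```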
@@ -227,7 +227,7 @@ optimizer = nn.Adam(params=vae.trainable_params(), learning_rate=0.001)
 net_with_loss = nn.WithLossCell(vae, net_loss)
 ```
-3. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/docs/programming_guide/en/master/quick_start/quick_start.html) in the tutorial.
+3. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorials/en/master/quick_start.html) in the tutorial.
 4. Use the SVI interface to train the VAE network. vi.run can return the trained network, and get_train_loss can get the loss after training.
 ```python
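# (Editor's hedged sketch of the step-4 code, not part of the diff: it assumes
# the MDP SVI API with run() and get_train_loss(), and ds_train being the MNIST
# dataset prepared in step 3.)
from mindspore.nn.probability.infer import SVI

vi = SVI(net_with_loss=net_with_loss, optimizer=optimizer)
vae = vi.run(train_dataset=ds_train, epochs=10)  # returns the trained network
trained_loss = vi.get_train_loss()               # loss after training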
@@ -437,7 +437,7 @@ if __name__ == "__main__":
 The uncertainty estimation toolbox is based on MindSpore Deep Probabilistic Programming (MDP), and it is suitable for mainstream deep learning models, such as regression, classification, target detection and so on. In the inference stage, with the uncertainty estimation toolbox, developers only need to pass in the trained model and training dataset, specify the task and the samples to be estimated, and can then obtain the aleatoric uncertainty and epistemic uncertainty. Based on the uncertainty information, developers can understand the model and the dataset better.
-In a classification task, for example, the model is the LeNet model. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/docs/programming_guide/en/master/quick_start/quick_start.html) in the tutorial. For evaluating the uncertainty of test examples, the use of the toolbox is as follows:
+In a classification task, for example, the model is the LeNet model. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorials/en/master/quick_start.html) in the tutorial. For evaluating the uncertainty of test examples, the use of the toolbox is as follows:
 ```python
 from mindspore.nn.probability.toolbox.uncertainty_evaluation import UncertaintyEvaluation
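# (Editor's hedged continuation, not part of the diff: it assumes
# UncertaintyEvaluation's documented constructor and eval methods; network,
# ds_train and eval_images stand for the trained LeNet, the MNIST training
# dataset and the test samples to be estimated.)
evaluation = UncertaintyEvaluation(model=network,
                                   train_dataset=ds_train,
                                   task_type='classification',
                                   num_classes=10,
                                   epochs=1,
                                   epi_uncer_model_path=None,
                                   ale_uncer_model_path=None,
                                   save_model=False)
epistemic_uncertainty = evaluation.eval_epistemic_uncertainty(eval_images)
aleatoric_uncertainty = evaluation.eval_aleatoric_uncertainty(eval_images)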

View File

@@ -56,7 +56,7 @@ The overall network architecture of SE-ResNeXt is shown below: [link](https://arxiv.org/abs/1709.015
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/enable_mixed_precision.html) training method uses single-precision and half-precision data to speed up the training of deep neural networks while preserving the accuracy achievable with single-precision training. Mixed precision training increases computing speed and reduces memory usage, and it enables training larger models or using larger batch sizes on specific hardware.
+The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to speed up the training of deep neural networks while preserving the accuracy achievable with single-precision training. Mixed precision training increases computing speed and reduces memory usage, and it enables training larger models or using larger batch sizes on specific hardware.
 # Environment Requirements
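For illustration, a minimal sketch of turning on mixed precision in MindSpore (an illustrative example, not part of the diff; it assumes the `Model` API's `amp_level` argument and uses a placeholder network in place of SE-ResNeXt):

```python
from mindspore import nn
from mindspore.train.model import Model

# A trivial stand-in network; in the README this would be the SE-ResNeXt model.
net = nn.SequentialCell([nn.Flatten(), nn.Dense(32 * 32 * 3, 10)])
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O3" casts the network to float16, a typical Ascend setting;
# see the linked mixed-precision guide for the available levels.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O3")
```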
@@ -65,8 +65,8 @@ The overall network architecture of SE-ResNeXt is shown below: [link](https://arxiv.org/abs/1709.015
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For details, see the following resources:
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
 # Quick Start

View File

@@ -186,7 +186,7 @@ bash run_distribute_train.sh DEVICE_ID EPOCH_SIZE LR DATASET PRE_TRAINED(optiona
 > Note:
-For reference material about RANK_TABLE_FILE, see this [link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_ascend.html); for how to obtain device_ip, see this [link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).
+For reference material about RANK_TABLE_FILE, see this [link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html); for how to obtain device_ip, see this [link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).
 #### Run

View File

@@ -432,7 +432,7 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579
 ...
 ```
-> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html).
+> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorials/en/master/distributed_training.html).
 > **Attention** This will bind the processor cores according to `device_num` and the total number of processors. If you do not want to bind processor cores during pretraining, remove the `taskset` operations in `scripts/run_distribute_train.sh`.
 ##### Run vgg19 on GPU

View File

@@ -448,7 +448,7 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579
 ...
 ```
-> For rank_table.json, refer to [Distributed Parallel Training](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/distributed_training_tutorials.html).
+> For rank_table.json, refer to [Distributed Parallel Training](https://www.mindspore.cn/tutorials/zh-CN/master/distributed_training.html).
 > **Note** Processor cores will be bound according to `device_num` and the total number of processors. If you do not want to bind processor cores during pretraining, remove the `taskset` operations in the `scripts/run_distribute_train.sh` script.
 ##### Run VGG19 on GPU

View File

@@ -404,7 +404,7 @@ YOLOv3-tiny was applied to 118000 images; the annotations and data format must be consistent with COCO 2017
 | Speed | 1 card: 130 imgs/s; 8 cards: 980 imgs/s |
 | Total time | 8 cards: 10 hours |
 | Parameters (M) | 69 |
-| Script | [YOLOv3_Tiny script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/YOLOv3_Tiny) |
+| Script | [YOLOv3_Tiny script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/yolov3_tiny) |
 ### Inference Performance