vgg16 readme update

This commit is contained in:
CaoJian 2020-08-29 16:20:46 +08:00
parent e6a4d932b4
commit e512fac44f
1 changed file with 2 additions and 1 deletion

@@ -79,6 +79,7 @@ here basic modules mainly include basic operation like: **3×3 conv** and **2×
## Mixed Precision
The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for `reduce precision`.
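The idea behind mixed precision can be illustrated without MindSpore at all: master weights stay in FP32, the expensive computation runs in FP16, and the result is accumulated back in FP32. A minimal NumPy sketch of that pattern (illustration only, not the MindSpore API):

```python
import numpy as np

rng = np.random.default_rng(0)
w_fp32 = rng.standard_normal((4, 4)).astype(np.float32)  # master weights kept in FP32
x_fp32 = rng.standard_normal((4,)).astype(np.float32)

# Cast the inputs to FP16 for the (cheaper, faster) matrix multiply
y_fp16 = w_fp32.astype(np.float16) @ x_fp32.astype(np.float16)

# Cast the result back to FP32 for accumulation/updates to preserve precision
y = y_fp16.astype(np.float32)

# The FP16 result stays close to the full-precision one
assert y.dtype == np.float32
assert np.allclose(y, w_fp32 @ x_fp32, atol=1e-2)
```

In real training frameworks this casting is applied automatically per operator (as the README notes for FP16 operators receiving FP32 inputs), along with loss scaling to keep small gradients representable in FP16.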
@@ -370,4 +371,4 @@ after allreduce eval: top5_correct=45582, tot=50000, acc=91.16%
In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
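The seed handling described above can be sketched as follows. The helper name mirrors the README's "create_dataset", but the body here is a hypothetical stand-in for illustration, not the actual dataset.py code:

```python
import random

import numpy as np


def create_dataset(seed=1):
    # Fix the seeds inside dataset creation so shuffling and augmentation
    # are reproducible across runs (the pattern dataset.py follows).
    random.seed(seed)
    np.random.seed(seed)
    indices = list(range(10))
    random.shuffle(indices)
    return indices


# Two runs with the same seed produce the same sample ordering
assert create_dataset(1) == create_dataset(1)
```

Seeding both the dataset pipeline and the training script (as train.py does) is what makes end-to-end runs repeatable.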
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).