!6008 [VM][Quant]Fix readme bug of quant network in model zoo

Merge pull request !6008 from chenfei_mindspore/fix-readme-of-mobilenetv2
mindspore-ci-bot 2020-09-11 09:16:02 +08:00 committed by Gitee
commit 881670428d
5 changed files with 14 additions and 165 deletions


@@ -58,7 +58,7 @@ Dataset used: [MNIST](<http://yann.lecun.com/exdb/mnist/>)
 - Hardware:Ascend
 - Prepare hardware environment with Ascend
 - Framework
-- [MindSpore](http://10.90.67.50/mindspore/archive/20200506/OpenSource/me_vm_x86/)
+- [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
 - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
 - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)


@@ -10,11 +10,10 @@
 - [Script and Sample Code](#script-and-sample-code)
 - [Training Process](#training-process)
 - [Evaluation Process](#evaluation-process)
-- [Evaluation](#evaluation)
 - [Model Description](#model-description)
 - [Performance](#performance)
-- [Training Performance](#evaluation-performance)
-- [Inference Performance](#evaluation-performance)
+- [Training Performance](#training-performance)
+- [Evaluation Performance](#evaluation-performance)
 - [Description of Random Situation](#description-of-random-situation)
 - [ModelZoo Homepage](#modelzoo-homepage)
@@ -46,7 +45,7 @@ Dataset used: [imagenet](http://www.image-net.org/)
 # [Features](#contents)
-## [Mixed Precision(Ascend)](#contents)
+## [Mixed Precision](#contents)
 The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
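The loss-scaling idea behind the mixed-precision description above can be sketched in plain NumPy. This is an illustrative sketch only, not MindSpore code (MindSpore handles the precision conversion automatically, as the README states): it shows why small FP32 gradients underflow in FP16 unless the loss is scaled up before the backward pass and the gradients scaled back down afterwards.

```python
import numpy as np

# Illustrative sketch (not MindSpore's API): simulate an FP16 backward pass
# by round-tripping FP32 gradients through half precision.
def fp16_roundtrip(grad_fp32, scale=1.0):
    scaled = (grad_fp32 * scale).astype(np.float16)  # "backward pass" in FP16
    return scaled.astype(np.float32) / scale         # unscale back in FP32

tiny_grads = np.array([1e-8, 2e-8], dtype=np.float32)
print(fp16_roundtrip(tiny_grads))                # [0. 0.]  -- underflows in FP16
print(fp16_roundtrip(tiny_grads, scale=1024.0))  # values preserved by loss scaling
```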
@@ -56,7 +55,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 - Hardware:Ascend
 - Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
 - Framework
-- [MindSpore](http://10.90.67.50/mindspore/archive/20200506/OpenSource/me_vm_x86/)
+- [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
 - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
 - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
@@ -70,8 +69,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 ├── mobileNetv2_quant
 ├── Readme.md # descriptions about MobileNetV2-Quant
 ├── scripts
-│   ├──run_train.sh # shell script for train on Ascend and GPU
-│   ├──run_infer_quant.sh # shell script for evaluation on Ascend
+│   ├──run_train.sh # shell script for train on Ascend
+│   ├──run_infer.sh # shell script for evaluation on Ascend
 ├── src
 │   ├──config.py # parameter configuration
 │   ├──dataset.py # creating dataset
@@ -115,7 +114,7 @@ epoch: [ 1/200], step:[ 624/ 625], loss:[3.917/3.917], time:[138221.250], lr:
 epoch time: 138331.250, per step time: 221.330, avg loss: 3.917
 ```
-## [Eval process](#contents)
+## [Evaluation process](#contents)
 ### Usage
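As a quick sanity check of the figures in the training log quoted above, the reported per-step time is simply the epoch time divided by the 625 steps per epoch:

```python
# Figures taken from the training log above.
epoch_time_ms = 138331.250   # "epoch time: 138331.250"
steps_per_epoch = 625        # "step:[ 624/ 625]"

per_step_ms = epoch_time_ms / steps_per_epoch
print(per_step_ms)  # 221.33, matching "per step time: 221.330"
```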


@@ -23,76 +23,6 @@ get_real_path(){
 }
-# check_and_get_Ascend_device(){
-# #device_list=(${1//,/ })
-# IFS=',' read -ra device_list <<<"$1"
-# last_device_id=0
-# first_device_id=8
-# device_used=(0 0 0 0 0 0 0 0)
-# for var in "${device_list[@]}"
-# do
-# if [ $((var)) -lt 0 ] || [ $((var)) -ge 8 ]
-# then
-# echo "error: device id=${var} is incorrect, device id must be in range [0,8), please check your device id list!"
-# exit 1
-# fi
-# if [ ${device_used[$((var))]} -eq 0 ]
-# then
-# device_used[ $((var)) ]=1
-# else
-# echo "error: device id is duplicate, please check your device id list!"
-# exit 1
-# fi
-# if [ ${last_device_id} \< $((var)) ]
-# then
-# last_device_id=$((var))
-# fi
-# if [ ${first_device_id} \> $((var)) ]
-# then
-# first_device_id=$((var))
-# fi
-# done
-# device_num=`expr ${last_device_id} - ${first_device_id} + 1`
-# if [ ${device_num} != ${#device_list[*]} ]
-# then
-# echo "error: the Ascend chips used must be continuous, please check your device id list!"
-# exit 1
-# fi
-# if [ ${first_device_id} -lt 4 ] && [ ${last_device_id} -ge 4 ]
-# then
-# if [ ${first_device_id} != 0 ] || [ ${last_device_id} != 7 ]
-# then
-# echo "error: device id list must be in the same group of [0,4) or [4,8) when using Ascend chips."
-# exit 1
-# fi
-# fi
-# echo "${first_device_id},`expr ${last_device_id} + 1`"
-# }
-# get_hccl_name(){
-# server_ip=$(ifconfig -a | grep inet | grep -v 127.0.0.1 | grep -v inet6 | awk '{print $2}' | tr -d "addr:")
-# device_num=`expr $2 - $1`
-# device_id_list=""
-# for(( i=$1 ; i < $2 ; i++ ))
-# do
-# device_id_list=${device_id_list}$i
-# done
-# hccl_name="hccl_${device_num}p_${device_id_list}_${server_ip}.json"
-# echo ${hccl_name}
-# }
 get_gpu_device_num(){
 #device_list=(${1//,/ })
@@ -125,46 +55,6 @@ run_ascend(){
 echo "Usage: bash run_train.sh [Ascend] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)\n "
 exit 1
 fi
-#first_last_device=$(check_and_get_Ascend_device $2)
-#devices=(${first_last_device//,/ })
-#IFS=',' read -ra devices <<<"${first_last_device}"
-# first_device=${first_last_device:0:1}
-# last_device=${first_last_device:2:1}
-# device_num=`expr $((last_device)) - $((first_device))`
-#single ascend or multiple ascend
-# if [ ${device_num} -gt 1 ]
-# then
-# ori_path=$(dirname "$(readlink -f "$0" )")
-# #generate hccl config file
-# cd ../../../../utils/hccl_tools/ || exit
-# device_num_arg="[${first_device},${last_device})"
-# python hccl_tools.py --device_num=${device_num_arg}
-# hccl_name=$(get_hccl_name ${first_device} ${last_device})
-# if [ ! -e ${hccl_name} ]
-# then
-# echo "error: failed to generate the hccl config file!"
-# exit 1
-# fi
-# mv ${hccl_name} ${ori_path}
-# cd ${ori_path} || exit
-# PATH1=$(get_real_path ${hccl_name})
-# if [ ! -f $PATH1 ]
-# then
-# echo "error: RANK_TABLE_FILE=$PATH1 is not a file"
-# exit 1
-# fi
-# export RANK_TABLE_FILE=$PATH1
-# fi
 PATH1=$(get_real_path $2)
 PATH2=$(get_real_path $3)
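The `get_real_path` helper used in the retained lines above has its body elided in this hunk. A minimal sketch of what such a helper conventionally does (an assumption; the script's actual body is not shown here) is to resolve a possibly-relative path to an absolute one:

```shell
# Sketch of a get_real_path-style helper (assumed; the actual body is not
# shown in this hunk). Absolute paths pass through; relative paths are
# resolved against the current working directory.
get_real_path(){
  if [ "${1:0:1}" = "/" ]; then
    echo "$1"
  else
    realpath -m "$PWD/$1"
  fi
}

get_real_path /tmp/data   # prints /tmp/data
```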


@@ -1,43 +1,4 @@
+# Contents
-# ResNet-50_quant Example
-## Description
-This is an example of training ResNet-50_quant with ImageNet2012 dataset in MindSpore.
-## Requirements
-- Install [MindSpore](https://www.mindspore.cn/install/en).
-- Download the dataset ImageNet2012
-> Unzip the ImageNet2012 dataset to any path you want and the folder structure should include train and eval dataset as follows:
-> ```
-> .
-> ├── ilsvrc # train dataset
-> └── ilsvrc_eval # infer dataset: images should be classified into 1000 directories firstly, just like train images
-> ```
-## Example structure
-```shell
-resnet50_quant/
-├── eval.py
-├── models
-│   └── resnet_quant.py
-├── Readme.md
-├── scripts
-│   ├── run_infer.sh
-│   └── run_train.sh
-├── src
-│   ├── config.py
-│   ├── crossentropy.py
-│   ├── dataset.py
-│   ├── launch.py
-│   └── lr_generator.py
-└── train.py
-```
+- [resnet50 Description](#resnet50-description)
+- [Model Architecture](#model-architecture)
@@ -49,17 +10,16 @@ resnet50_quant/
 - [Script and Sample Code](#script-and-sample-code)
 - [Training Process](#training-process)
 - [Evaluation Process](#evaluation-process)
-- [Evaluation](#evaluation)
 - [Model Description](#model-description)
 - [Performance](#performance)
-- [Training Performance](#evaluation-performance)
-- [Inference Performance](#evaluation-performance)
+- [Training Performance](#training-performance)
+- [Evaluation Performance](#evaluation-performance)
 - [Description of Random Situation](#description-of-random-situation)
 - [ModelZoo Homepage](#modelzoo-homepage)
 # [resnet50 Description](#contents)
-ResNet-50 is a convolutional neural network that is 50 layers deep, which can classify ImageNet image nto 1000 object categories with 76% accuracy.
+ResNet-50 is a convolutional neural network that is 50 layers deep, which can classify ImageNet image to 1000 object categories with 76% accuracy.
 [Paper](https://arxiv.org/abs/1512.03385) Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun."Deep Residual Learning for Image Recognition." He, Kaiming , et al. "Deep Residual Learning for Image Recognition." IEEE Conference on Computer Vision & Pattern Recognition IEEE Computer Society, 2016.
@@ -84,7 +44,7 @@ Dataset used: [imagenet](http://www.image-net.org/)
 # [Features](#contents)
-## [Mixed Precision(Ascend)](#contents)
+## [Mixed Precision](#contents)
 The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
@@ -94,7 +54,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 - Hardware:Ascend
 - Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
 - Framework
-- [MindSpore](http://10.90.67.50/mindspore/archive/20200506/OpenSource/me_vm_x86/)
+- [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
 - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
 - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
@@ -147,7 +107,7 @@ epoch: 4 step: 5004, loss is 3.2795618
 epoch: 5 step: 5004, loss is 3.1978393
 ```
-## [Eval process](#contents)
+## [Evaluation process](#contents)
 ### Usage