forked from mindspore-Ecosystem/mindspore
!5363 Modify for resnet readme and fix bool type optional
Merge pull request !5363 from qujianwei/r0.7
This commit is contained in: commit 23fc178a5a
@ -1,40 +1,127 @@
# ResNet Example

# Contents
- [ResNet Description](#resnet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
        - [Training](#training)
        - [Distributed Training](#distributed-training)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation](#evaluation)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)

# [ResNet Description](#contents)

## Description

ResNet (Residual Neural Network) was proposed by Kaiming He and his colleagues at Microsoft Research. Using residual units, they successfully trained a 152-layer network and won the ILSVRC 2015 image classification challenge with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet. Traditional convolutional or fully connected networks lose some information as layers are stacked, and suffer from vanishing or exploding gradients, which makes very deep networks hard to train. ResNet alleviates these problems to a large extent: shortcut connections pass the input directly to the output, preserving the information, so each block only has to learn the residual between its input and output. This simplifies the learning objective, speeds up training considerably, and greatly improves accuracy. The residual structure has become very popular and is used directly in other architectures such as Inception-ResNet.

These are examples of training ResNet-50/ResNet-101/SE-ResNet-50 with the CIFAR-10 or ImageNet2012 dataset in MindSpore. ResNet-50 and ResNet-101 follow paper 1 below; SE-ResNet-50 is a variant of ResNet-50 that follows papers 2 and 3 below. Training SE-ResNet-50 for just 24 epochs on 8 Ascend 910 devices reaches a top-1 accuracy of 75.9%. (Training ResNet-101 or SE-ResNet-50 with CIFAR-10 is not supported yet.)
## Paper

1. [Deep Residual Learning for Image Recognition](https://arxiv.org/pdf/1512.03385.pdf): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.

2. [Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507): Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu.

3. [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/abs/1812.01187): Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li.
# [Model Architecture](#contents)

The overall network architecture of ResNet is shown below:

[Link](https://arxiv.org/pdf/1512.03385.pdf)
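The residual unit that this architecture is built from can be sketched in a few lines of MindSpore. This is a simplified, illustrative version of the bottleneck block (the real implementation lives in src/resnet.py and handles more cases):

```python
import mindspore.nn as nn

class ResidualBlock(nn.Cell):
    """Simplified sketch of the ResNet-50/101 bottleneck residual unit."""
    def __init__(self, in_channel, out_channel, stride=1):
        super(ResidualBlock, self).__init__()
        mid_channel = out_channel // 4
        self.conv1 = nn.Conv2d(in_channel, mid_channel, kernel_size=1, stride=1)
        self.bn1 = nn.BatchNorm2d(mid_channel)
        self.conv2 = nn.Conv2d(mid_channel, mid_channel, kernel_size=3, stride=stride, pad_mode="same")
        self.bn2 = nn.BatchNorm2d(mid_channel)
        self.conv3 = nn.Conv2d(mid_channel, out_channel, kernel_size=1, stride=1)
        self.bn3 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU()
        # Projection shortcut when the shape changes, identity otherwise.
        self.down_sample = None
        if stride != 1 or in_channel != out_channel:
            self.down_sample = nn.SequentialCell([
                nn.Conv2d(in_channel, out_channel, kernel_size=1, stride=stride),
                nn.BatchNorm2d(out_channel)])

    def construct(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        if self.down_sample is not None:
            identity = self.down_sample(x)
        # The shortcut: the block only learns the residual between input and output.
        return self.relu(out + identity)
```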
# [Dataset](#contents)
Dataset used: [CIFAR-10](<http://www.cs.toronto.edu/~kriz/cifar.html>)

- Dataset size: 175M, 60,000 32×32 color images in 10 classes
    - Train: 146M, 50,000 images
    - Test: 29.3M, 10,000 images
- Data format: binary files
    - Note: Data will be processed in dataset.py
- Download the dataset; the directory structure is as follows:

```
├─cifar-10-batches-bin
│
└─cifar-10-verify-bin
```

Dataset used: [ImageNet2012](http://www.image-net.org/)

- Dataset size: ~125G, 1.2 million color images in 1000 classes
    - Train: 120G, 1.2 million images
    - Test: 5G, 50,000 images
- Data format: RGB images
    - Note: Data will be processed in src/dataset.py
- Download the dataset; the directory structure is as follows:

```
└─dataset
  ├─ilsvrc                # train dataset
  └─validation_preprocess # evaluate dataset
```

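The "Data will be processed" notes above refer to the input pipelines built in dataset.py / src/dataset.py. A minimal sketch of that kind of CIFAR-10 pipeline follows; the augmentation values and the 0.x-era import paths are illustrative, not this repository's exact settings:

```python
import mindspore.common.dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as C
import mindspore.dataset.transforms.vision.c_transforms as CV

def create_cifar10_dataset(data_path, batch_size=32, training=True):
    """Build a CIFAR-10 pipeline roughly in the shape of src/dataset.py."""
    data_set = ds.Cifar10Dataset(data_path, shuffle=training)
    trans = []
    if training:
        # Standard CIFAR-10 augmentation: padded random crop plus random flip.
        trans += [CV.RandomCrop((32, 32), (4, 4, 4, 4)), CV.RandomHorizontalFlip()]
    trans += [CV.Rescale(1.0 / 255.0, 0.0),
              CV.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
              CV.HWC2CHW()]
    data_set = data_set.map(input_columns="image", operations=trans)
    data_set = data_set.map(input_columns="label", operations=C.TypeCast(mstype.int32))
    return data_set.batch(batch_size, drop_remainder=True)
```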
# [Features](#contents)
## Mixed Precision
The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.

For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log level and searching for "reduce precision".
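As a minimal sketch (not this repository's exact train.py), mixed precision is typically turned on through the `amp_level` argument of MindSpore's Model API; the toy network and hyper-parameters below are placeholders for the real ResNet in src/resnet.py:

```python
import mindspore.nn as nn
from mindspore import context
from mindspore.train.model import Model

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

# Toy classifier standing in for ResNet-50.
net = nn.SequentialCell([nn.Flatten(), nn.Dense(32 * 32 * 3, 10)])
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)

# amp_level="O2" casts the network to FP16 while keeping BatchNorm
# and the loss computation in FP32.
model = Model(net, loss_fn=loss, optimizer=opt, metrics={"acc"}, amp_level="O2")
```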
# [Environment Requirements](#contents)

- Hardware (Ascend/GPU)
    - Prepare hardware environment with an Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
    - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)

# [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows:

- running on Ascend

```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

- running on GPU

```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

# [Script Description](#contents)
## [Script and Sample Code](#contents)
```shell
.
@ -59,8 +146,7 @@ ImageNet2012
└── train.py                # train net
```
## [Script Parameters](#contents)
Parameters for both training and evaluation can be set in config.py.
@ -152,13 +238,10 @@ Parameters for both training and evaluation can be set in config.py.
```
"lr_end": 0.0001, # end learning rate
```
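The fragment above is the tail of a larger parameter dictionary elided by this diff. For orientation, config.py in this family of model zoo scripts is typically an EasyDict along these lines; only "lr_end" is taken from the diff, all other keys and values here are illustrative placeholders:

```python
from easydict import EasyDict as ed

# Illustrative shape of config.py; values are placeholders, not the
# authoritative settings for this model.
config = ed({
    "class_num": 10,        # number of dataset classes
    "batch_size": 32,       # per-device batch size
    "momentum": 0.9,        # optimizer momentum
    "weight_decay": 1e-4,   # L2 regularization
    "epoch_size": 90,       # total training epochs
    "lr_init": 0.01,        # initial learning rate
    "lr_end": 0.0001,       # end learning rate (from the fragment above)
    "lr_max": 0.1,          # peak learning rate
})
```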
## [Training Process](#contents)

### Training

- running on Ascend

```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```
@ -166,22 +249,16 @@ Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imag
```
# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

For distributed training, an hccl configuration file in JSON format (e.g. rank_table.json) needs to be created in advance. Please follow the instructions in the link below:

https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools
Training results will be stored in the example path, whose folder name begins with "train" or "train_parallel". Under it you can find checkpoint files together with results like the following in the log.
@ -236,8 +313,25 @@ epoch: 4 step: 5004, loss is 3.5011306
```
epoch: 5 step: 5004, loss is 3.3501816
...
```

- running on GPU

```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

## [Evaluation Process](#contents)

### Evaluation

- evaluation on CIFAR-10 dataset when running on Ascend

@ -245,8 +339,6 @@ epoch: 5 step: 5004, loss is 3.3501816
```
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

```
# evaluation example
sh run_eval.sh resnet50 cifar10 ~/cifar-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```
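Internally, eval.py restores the checkpoint passed on the command line before computing its metrics. A minimal sketch of that step, using MindSpore's standard serialization APIs (the toy network is a stand-in; the real eval.py builds the ResNet from src/resnet.py):

```python
import mindspore.nn as nn
from mindspore.train.serialization import load_checkpoint, load_param_into_net

net = nn.SequentialCell([nn.Flatten(), nn.Dense(32 * 32 * 3, 10)])  # stand-in net
param_dict = load_checkpoint("resnet-90_195.ckpt")  # path as in the example above
load_param_into_net(net, param_dict)                # copy weights into the network
net.set_train(False)  # switch BatchNorm/Dropout to inference behavior
```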
@ -280,27 +372,110 @@ result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199

```
result: {'top_5_accuracy': 0.9342589628681178, 'top_1_accuracy': 0.768065781049936} ckpt=train_parallel0/resnet-24_5004.ckpt
```

### Running parameter server mode training

```
# parameter server training Ascend example
sh run_parameter_server_train.sh [resnet50|resnet101] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# parameter server training GPU example
sh run_parameter_server_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```

> The way to evaluate is the same as the examples above.

# [Model Description](#contents)
## [Performance](#contents)
### Evaluation Performance
#### ResNet50 on cifar10

| Parameters                 | Ascend 910                                    | GPU                                   |
| -------------------------- | --------------------------------------------- | ------------------------------------- |
| Model Version              | ResNet50-v1.5                                 | ResNet50-v1.5                         |
| Resource                   | Ascend 910, CPU 2.60GHz 56 cores, memory 314G | GPU, CPU 2.1GHz 24 cores, memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                   | 08/01/2020 (month/day/year)           |
| MindSpore Version          | 0.1.0-alpha                                   | 0.6.0-alpha                           |
| Dataset                    | CIFAR-10                                      | CIFAR-10                              |
| Training Parameters        | epoch=90, steps per epoch=195, batch_size=32  | epoch=90, steps per epoch=195, batch_size=32 |
| Optimizer                  | Momentum                                      | Momentum                              |
| Loss Function              | Softmax Cross Entropy                         | Softmax Cross Entropy                 |
| Outputs                    | probability                                   | probability                           |
| Loss                       | 0.000356                                      | 0.000716                              |
| Speed                      | 18.4 ms/step (8 pcs)                          | 69 ms/step (8 pcs)                    |
| Total time                 | 6 mins                                        | 20.2 mins                             |
| Parameters (M)             | 25.5                                          | 25.5                                  |
| Checkpoint for Fine tuning | 179.7M (.ckpt file)                           | 179.7M (.ckpt file)                   |
| Scripts                    | [ResNet scripts](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [ResNet scripts](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### ResNet50 on imagenet2012

| Parameters                 | Ascend 910                                    | GPU                                   |
| -------------------------- | --------------------------------------------- | ------------------------------------- |
| Model Version              | ResNet50-v1.5                                 | ResNet50-v1.5                         |
| Resource                   | Ascend 910, CPU 2.60GHz 56 cores, memory 314G | GPU, CPU 2.1GHz 24 cores, memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                   | 08/01/2020 (month/day/year)           |
| MindSpore Version          | 0.1.0-alpha                                   | 0.6.0-alpha                           |
| Dataset                    | ImageNet2012                                  | ImageNet2012                          |
| Training Parameters        | epoch=90, steps per epoch=5004, batch_size=32 | epoch=90, steps per epoch=5004, batch_size=32 |
| Optimizer                  | Momentum                                      | Momentum                              |
| Loss Function              | Softmax Cross Entropy                         | Softmax Cross Entropy                 |
| Outputs                    | probability                                   | probability                           |
| Loss                       | 1.8464266                                     | 1.9023                                |
| Speed                      | 18.4 ms/step (8 pcs)                          | 67.1 ms/step (8 pcs)                  |
| Total time                 | 139 mins                                      | 258 mins                              |
| Parameters (M)             | 25.5                                          | 25.5                                  |
| Checkpoint for Fine tuning | 197M (.ckpt file)                             | 197M (.ckpt file)                     |
| Scripts                    | [ResNet scripts](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [ResNet scripts](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### ResNet101 on imagenet2012

| Parameters                 | Ascend 910                                     | GPU                                   |
| -------------------------- | ---------------------------------------------- | ------------------------------------- |
| Model Version              | ResNet101                                      | ResNet101                             |
| Resource                   | Ascend 910, CPU 2.60GHz 56 cores, memory 314G  | GPU, CPU 2.1GHz 24 cores, memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                    | 08/14/2020 (month/day/year)           |
| MindSpore Version          | 0.1.0-alpha                                    | 0.6.0-alpha                           |
| Dataset                    | ImageNet2012                                   | ImageNet2012                          |
| Training Parameters        | epoch=120, steps per epoch=5004, batch_size=32 | epoch=120, steps per epoch=5004, batch_size=32 |
| Optimizer                  | Momentum                                       | Momentum                              |
| Loss Function              | Softmax Cross Entropy                          | Softmax Cross Entropy                 |
| Outputs                    | probability                                    | probability                           |
| Loss                       | 1.6453942                                      | 1.7023412                             |
| Speed                      | 30.3 ms/step (8 pcs)                           | 108.6 ms/step (8 pcs)                 |
| Total time                 | 301 mins                                       | 1100 mins                             |
| Parameters (M)             | 44.6                                           | 44.6                                  |
| Checkpoint for Fine tuning | 343M (.ckpt file)                              | 343M (.ckpt file)                     |
| Scripts                    | [ResNet scripts](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [ResNet scripts](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### SE-ResNet50 on imagenet2012

| Parameters                 | Ascend 910                                    |
| -------------------------- | --------------------------------------------- |
| Model Version              | SE-ResNet50                                   |
| Resource                   | Ascend 910, CPU 2.60GHz 56 cores, memory 314G |
| Uploaded Date              | 08/16/2020 (month/day/year)                   |
| MindSpore Version          | 0.6.0-alpha                                   |
| Dataset                    | ImageNet2012                                  |
| Training Parameters        | epoch=24, steps per epoch=5004, batch_size=32 |
| Optimizer                  | Momentum                                      |
| Loss Function              | Softmax Cross Entropy                         |
| Outputs                    | probability                                   |
| Loss                       | 1.754404                                      |
| Speed                      | 24.6 ms/step (8 pcs)                          |
| Total time                 | 49.3 mins                                     |
| Parameters (M)             | 25.5                                          |
| Checkpoint for Fine tuning | 215.9M (.ckpt file)                           |
| Scripts                    | [ResNet scripts](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.

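Concretely, the two seeds mentioned above look like this; `random.seed(1)` matches the train.py diff below, while the dataset call is a sketch of what create_dataset does (using MindSpore's standard `ds.config.set_seed` API):

```python
import random
import mindspore.dataset as ds

random.seed(1)         # Python RNG, as set at the bottom of train.py
ds.config.set_seed(1)  # seed for dataset shuffling/augmentation in create_dataset
```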
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
train.py
@ -16,6 +16,7 @@
import os
import random
import argparse
import ast
import numpy as np
from mindspore import context
from mindspore import Tensor

@ -35,13 +36,13 @@ from src.lr_generator import get_lr, warmup_cosine_annealing_lr
parser = argparse.ArgumentParser(description='Image classification')
parser.add_argument('--net', type=str, default=None, help='Resnet Model, either resnet50 or resnet101')
parser.add_argument('--dataset', type=str, default=None, help='Dataset, either cifar10 or imagenet2012')
parser.add_argument('--run_distribute', type=ast.literal_eval, default=False, help='Run distribute')
parser.add_argument('--device_num', type=int, default=1, help='Device num.')

parser.add_argument('--dataset_path', type=str, default=None, help='Dataset path')
parser.add_argument('--device_target', type=str, default='Ascend', help='Device target')
parser.add_argument('--pre_trained', type=str, default=None, help='Pretrained checkpoint path')
parser.add_argument('--parameter_server', type=ast.literal_eval, default=False, help='Run parameter server train')
args_opt = parser.parse_args()

random.seed(1)
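The switch from `type=bool` to `type=ast.literal_eval` above is the "fix bool type optional" from the commit title: argparse passes the raw command-line string to the `type` callable, and `bool("False")` is `True` because any non-empty string is truthy, so `--run_distribute False` used to be silently misparsed. A quick demonstration:

```python
import ast

print(bool("False"))              # True  -- why type=bool misparses the flag
print(ast.literal_eval("False"))  # False -- parses the literal, as intended
```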