forked from mindspore-Ecosystem/mindspore
!5175 ReadMe file normalize
Merge pull request !5175 from chenfei_mindspore/fix-readme
This commit is contained in:
commit b178c0ccd1

@@ -1,189 +1,183 @@
# LeNet Quantization Aware Training
|
||||
# Contents
|
||||
|
||||
## Description
|
||||
|
||||
This tutorial trains LeNet on the MNIST dataset in MindSpore using quantization aware training.

It is a simple, basic example of constructing a quantization aware network in MindSpore.
|
||||
|
||||
In this tutorial, you will:
|
||||
|
||||
1. Train a MindSpore fusion model for MNIST from scratch using `nn.Conv2dBnAct` and `nn.DenseBnAct`.
|
||||
2. Fine-tune the fusion model by applying the quantization aware training auto network converter API `convert_quant_network`; after the network converges, export a quantization aware model checkpoint file (a condensed sketch of steps 1-2 follows this list).
|
||||
3. Use the quantization aware model to create an actually quantized model for the Ascend inference backend.
|
||||
4. See the persistence of accuracy in the inference backend and a 4x smaller model. To see the latency benefits on mobile, try out the Ascend inference backend examples.
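The following minimal sketch condenses steps 1-2 under stated assumptions: `LeNet5Fusion`, `create_dataset` and `cfg` come from the `src` directory described later in this README, and the import paths follow MindSpore 0.5.x. It is not the repository's `train.py`.

```python
# Condensed sketch of steps 1-2 (assumptions: LeNet5Fusion, create_dataset and cfg come
# from src/; import paths follow MindSpore 0.5.x; this is not the repository's train.py).
import mindspore.nn as nn
from mindspore.nn.metrics import Accuracy
from mindspore.train.model import Model
from mindspore.train.quant import quant

ds_train = create_dataset("./MNIST_Data/train", cfg.batch_size, cfg.epoch_size)

# Step 1: train the fusion model built from nn.Conv2dBnAct / nn.DenseBnAct
network = LeNet5Fusion(cfg.num_classes)
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
net_opt = nn.Momentum(network.trainable_params(), learning_rate=0.01, momentum=0.9)
model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy()})
model.train(cfg.epoch_size, ds_train)

# Step 2: convert to a quantization aware network and fine-tune it
network = quant.convert_quant_network(network)
model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy()})
model.train(1, ds_train)
```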
|
||||
- [LeNet Description](#lenet-description)
|
||||
- [Model Architecture](#model-architecture)
|
||||
- [Dataset](#dataset)
|
||||
- [Environment Requirements](#environment-requirements)
|
||||
- [Quick Start](#quick-start)
|
||||
- [Script Description](#script-description)
|
||||
- [Script and Sample Code](#script-and-sample-code)
|
||||
- [Script Parameters](#script-parameters)
|
||||
- [Training Process](#training-process)
|
||||
- [Training](#training)
|
||||
- [Evaluation Process](#evaluation-process)
|
||||
- [Evaluation](#evaluation)
|
||||
- [Model Description](#model-description)
|
||||
- [Performance](#performance)
|
||||
- [Evaluation Performance](#evaluation-performance)
|
||||
- [ModelZoo Homepage](#modelzoo-homepage)
|
||||
|
||||
|
||||
## Train fusion model
|
||||
# [LeNet Description](#contents)
|
||||
|
||||
### Install
|
||||
LeNet, proposed in 1998, is a typical convolutional neural network. It was used for digit recognition and achieved great success.
|
||||
|
||||
Install MindSpore for the Ascend or GPU device from [MindSpore](https://www.mindspore.cn/install/en).
|
||||
[Paper](https://ieeexplore.ieee.org/document/726791): Y.Lecun, L.Bottou, Y.Bengio, P.Haffner. Gradient-Based Learning Applied to Document Recognition. *Proceedings of the IEEE*. 1998.
|
||||
|
||||
This is the quantized network of LeNet.
|
||||
|
||||
```python
|
||||
pip uninstall -y mindspore-ascend
|
||||
pip uninstall -y mindspore-gpu
|
||||
pip install mindspore-ascend.whl
|
||||
```
|
||||
# [Model Architecture](#contents)
|
||||
|
||||
Then you will see the following output:
|
||||
LeNet is very simple, containing 5 layers: 2 convolutional layers and 3 fully connected layers.
|
||||
|
||||
# [Dataset](#contents)
|
||||
|
||||
```bash
|
||||
>>> Found existing installation: mindspore-ascend
|
||||
>>> Uninstalling mindspore-ascend:
|
||||
>>> Successfully uninstalled mindspore-ascend.
|
||||
```
|
||||
Dataset used: [MNIST](<http://yann.lecun.com/exdb/mnist/>)
|
||||
|
||||
### Prepare Dataset
|
||||
- Dataset size: 52.4M, 60,000 28*28 images in 10 classes
    - Train: 60,000 images
    - Test: 10,000 images
- Data format: binary files
- Note: data will be processed in dataset.py
|
||||
|
||||
Download the MNIST dataset. The directory structure is as follows:
|
||||
|
||||
```
|
||||
└─MNIST_Data
|
||||
└─Data
|
||||
├─test
|
||||
│ t10k-images.idx3-ubyte
|
||||
│ t10k-labels.idx1-ubyte
|
||||
│
|
||||
└─train
|
||||
train-images.idx3-ubyte
|
||||
train-labels.idx1-ubyte
|
||||
```
|
||||
|
||||
### Define fusion model
|
||||
# [Environment Requirements](#contents)
|
||||
|
||||
Define a MindSpore fusion model using `nn.Conv2dBnAct` and `nn.DenseBnAct`.
|
||||
- Hardware: Ascend
|
||||
- Prepare hardware environment with Ascend
|
||||
- Framework
|
||||
- [MindSpore](http://10.90.67.50/mindspore/archive/20200506/OpenSource/me_vm_x86/)
|
||||
- For more information, please check the resources below:
|
||||
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
|
||||
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
|
||||
|
||||
```Python
|
||||
class LeNet5(nn.Cell):
|
||||
"""
|
||||
Define Lenet fusion model
|
||||
"""
|
||||
# [Quick Start](#contents)
|
||||
|
||||
def __init__(self, num_class=10, channel=1):
|
||||
super(LeNet5, self).__init__()
|
||||
self.num_class = num_class
|
||||
|
||||
# change `nn.Conv2d` to `nn.Conv2dBnAct`
|
||||
self.conv1 = nn.Conv2dBnAct(channel, 6, 5, activation='relu')
|
||||
self.conv2 = nn.Conv2dBnAct(6, 16, 5, activation='relu')
|
||||
# change `nn.Dense` to `nn.DenseBnAct`
|
||||
self.fc1 = nn.DenseBnAct(16 * 5 * 5, 120, activation='relu')
|
||||
self.fc2 = nn.DenseBnAct(120, 84, activation='relu')
|
||||
self.fc3 = nn.DenseBnAct(84, self.num_class)
|
||||
|
||||
self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
|
||||
self.flatten = nn.Flatten()
|
||||
|
||||
def construct(self, x):
|
||||
x = self.conv1(x)
|
||||
x = self.max_pool2d(x)
|
||||
x = self.conv2(x)
|
||||
x = self.max_pool2d(x)
|
||||
x = self.flatten(x)
|
||||
x = self.fc1(x)
|
||||
x = self.fc2(x)
|
||||
x = self.fc3(x)
|
||||
return x
|
||||
```
|
||||
|
||||
Get the MNIST dataset for training from scratch.
|
||||
|
||||
```Python
|
||||
ds_train = create_dataset(os.path.join(args.data_path, "train"),
|
||||
cfg.batch_size, cfg.epoch_size)
|
||||
step_size = ds_train.get_dataset_size()
|
||||
|
||||
## Train quantization aware model
|
||||
|
||||
### Define quantization aware model
|
||||
|
||||
You will apply quantization aware training to the whole model, and "fake quant" operators are inserted into it. All layers are now prepared by "fake quant" operators.
|
||||
|
||||
Note that the resulting model is quantization aware but not quantized (e.g. the weights are float32 instead of int8).
|
||||
After installing MindSpore via the official website, you can start training and evaluation as follows:
|
||||
|
||||
```python
# define fusion network
network = LeNet5Fusion(cfg.num_classes)

# load quantization aware network checkpoint
param_dict = load_checkpoint(args.ckpt_path)
load_param_into_net(network, param_dict)

# convert fusion network to quantization aware network
network = quant.convert_quant_network(network)
```

```bash
# enter the ../lenet directory and train the lenet network, then a '.ckpt' file will be generated.
sh run_standalone_train_ascend.sh [DATA_PATH]
# enter the lenet dir, train LeNet-Quant
python train.py --device_target=Ascend --data_path=[DATA_PATH] --ckpt_path=[CKPT_PATH] --dataset_sink_mode=True
# evaluate LeNet-Quant
python eval.py --device_target=Ascend --data_path=[DATA_PATH] --ckpt_path=[CKPT_PATH] --dataset_sink_mode=True
```
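As a quick sanity check (a sketch, not part of the repository's scripts), you can confirm that the converted network above still holds float32 parameters; `convert_quant_network` only inserts fake-quant operators, and int8 weights appear only after export to the inference backend.

```python
# `network` is the converted network from the snippet above
for name, param in network.parameters_and_names():
    print(name, param)   # weights and fake-quant min/max buffers are still float32
```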
|
||||
|
||||
### load checkpoint
|
||||
# [Script Description](#contents)
|
||||
|
||||
After converting to the quantization aware network, we can load the checkpoint file.
|
||||
## [Script and Sample Code](#contents)
|
||||
|
||||
```
|
||||
├── model_zoo
|
||||
├── README.md // descriptions about all the models
|
||||
├── lenet_quant
|
||||
├── README.md // descriptions about LeNet-Quant
|
||||
├── src
|
||||
│ ├── config.py // parameter configuration
|
||||
│ ├── dataset.py // creating dataset
|
||||
│ ├── lenet_fusion.py // auto-constructed quantized network model of LeNet-Quant
│ ├── lenet_quant.py // manually constructed quantized network model of LeNet-Quant
│ ├── loss_monitor.py // monitor of network's loss and other data
|
||||
├── requirements.txt // package needed
|
||||
├── train.py // training LeNet-Quant network with device Ascend
|
||||
├── eval.py // evaluating LeNet-Quant network with device Ascend
|
||||
```
|
||||
|
||||
## [Script Parameters](#contents)
|
||||
|
||||
```python
config_ck = CheckpointConfig(save_checkpoint_steps=cfg.epoch_size * step_size,
                             keep_checkpoint_max=cfg.keep_checkpoint_max)
ckpoint_cb = ModelCheckpoint(prefix="checkpoint_lenet", config=config_ck)
model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy()})
```

Major parameters in train.py and config.py are as follows:

```python
--data_path: The absolute full path to the train and evaluation datasets.
--epoch_size: Total number of training epochs.
--batch_size: Training batch size.
--image_height: Image height used as input to the model.
--image_width: Image width used as input to the model.
--device_target: Device where the code will be implemented. Optional values
                 are "Ascend", "GPU", "CPU". Only "Ascend" is supported now.
--ckpt_path: The absolute full path to the checkpoint file saved after training.
--data_path: Path where the dataset is saved.
```
|
||||
|
||||
### train quantization aware model
|
||||
## [Training Process](#contents)
|
||||
|
||||
Also, you can just run this command instead.
|
||||
### Training
|
||||
|
||||
```bash
|
||||
python train_quant.py --data_path MNIST_Data --device_target Ascend --ckpt_path checkpoint_lenet.ckpt
|
||||
```
|
||||
```bash
python train.py --device_target=Ascend --dataset_path=/home/datasets/MNIST --dataset_sink_mode=True > log.txt 2>&1 &
```
|
||||
|
||||
After running the command above, you will get the loss value of each step as follows:

After training, the loss values will be as follows:
|
||||
|
||||
```bash
|
||||
>>> Epoch: [ 1/ 10] step: [ 1/ 900], loss: [2.3040/2.5234], time: [1.300234]
|
||||
>>> ...
|
||||
>>> Epoch: [ 9/ 10] step: [887/ 900], loss: [0.0113/0.0223], time: [1.300234]
|
||||
>>> Epoch: [ 9/ 10] step: [888/ 900], loss: [0.0334/0.0223], time: [1.300234]
|
||||
>>> Epoch: [ 9/ 10] step: [889/ 900], loss: [0.0233/0.0223], time: [1.300234]
|
||||
```
|
||||
```bash
# grep "Epoch " log.txt
|
||||
Epoch: [ 1/ 10], step: [ 937/ 937], loss: [0.0081], avg loss: [0.0081], time: [11268.6832ms]
|
||||
Epoch time: 11269.352, per step time: 12.027, avg loss: 0.008
|
||||
Epoch: [ 2/ 10], step: [ 937/ 937], loss: [0.0496], avg loss: [0.0496], time: [3085.2389ms]
|
||||
Epoch time: 3085.641, per step time: 3.293, avg loss: 0.050
|
||||
Epoch: [ 3/ 10], step: [ 937/ 937], loss: [0.0017], avg loss: [0.0017], time: [3085.3510ms]
|
||||
...
|
||||
...
|
||||
```
|
||||
|
||||
### Evaluate quantization aware model
|
||||
The model checkpoint will be saved in the current directory.
|
||||
|
||||
The procedure for evaluating a quantization aware model differs from the normal one: because the checkpoint was created by the quantization aware model, we need to load the fusion model checkpoint before converting the fusion model to the quantization aware model.
|
||||
## [Evaluation Process](#contents)
|
||||
|
||||
```python
# define fusion network
network = LeNet5Fusion(cfg.num_classes)

# load quantization aware network checkpoint
param_dict = load_checkpoint(args.ckpt_path)
load_param_into_net(network, param_dict)

# convert fusion network to quantization aware network
network = quant.convert_quant_network(network)
```

### Evaluation

Before running the command below, please check the checkpoint path used for evaluation.

```bash
python eval.py --data_path Data --ckpt_path ckpt/checkpoint_lenet-1_937.ckpt > log.txt 2>&1 &
```
|
||||
|
||||
Also, you can just run this command instead.
|
||||
You can view the results through the file "log.txt". The accuracy of the test dataset will be as follows:
|
||||
|
||||
```bash
|
||||
python eval_quant.py --data_path MNIST_Data --device_target Ascend --ckpt_path checkpoint_lenet.ckpt
|
||||
```
|
||||
```bash
# grep "Accuracy: " log.txt
|
||||
'Accuracy': 0.9842
|
||||
```
|
||||
|
||||
The top-1 accuracy will be displayed in the shell.
|
||||
# [Model Description](#contents)
|
||||
|
||||
```bash
|
||||
>>> Accuracy: 98.54.
|
||||
```
|
||||
## [Performance](#contents)
|
||||
|
||||
## Note
|
||||
### Evaluation Performance
|
||||
|
||||
Here are some optional parameters:
|
||||
| Parameters | LeNet |
|
||||
| -------------------------- | ----------------------------------------------------------- |
|
||||
| Resource | Ascend 910 CPU 2.60GHz 56cores Memory 314G |
|
||||
| uploaded Date | 06/09/2020 (month/day/year) |
|
||||
| MindSpore Version | 0.5.0-beta |
|
||||
| Dataset | MNIST |
|
||||
| Training Parameters | epoch=10, steps=937, batch_size = 64, lr=0.01 |
|
||||
| Optimizer | Momentum |
|
||||
| Loss Function | Softmax Cross Entropy |
|
||||
| outputs | probability |
|
||||
| Loss | 0.002 |
|
||||
| Speed |3.29 ms/step |
|
||||
| Total time | 40s |
|
||||
| Checkpoint for Fine tuning | 482k (.ckpt file) |
|
||||
| Scripts | [scripts](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/lenet) |
|
||||
|
||||
```bash
|
||||
--device_target {Ascend,GPU}
|
||||
device where the code will be implemented (default: Ascend)
|
||||
--data_path DATA_PATH
|
||||
path where the dataset is saved
|
||||
--dataset_sink_mode DATASET_SINK_MODE
|
||||
dataset_sink_mode is False or True
|
||||
```
|
||||
# [Description of Random Situation](#contents)
|
||||
|
||||
You can run ```python train.py -h``` or ```python eval.py -h``` to get more information.
|
||||
In dataset.py, we set the seed inside the "create_dataset" function.
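As an illustration (the seed value below is only an example, not necessarily the one used in dataset.py), fixing the dataset seed in MindSpore looks like this:

```python
import mindspore.dataset as ds

ds.config.set_seed(58)   # example value; makes shuffle and random ops in the pipeline reproducible
```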
|
||||
|
||||
We encourage you to try this new capability, which can be particularly important for deployment in resource-constrained environments.
|
||||
# [ModelZoo Homepage](#contents)
|
||||
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
|
||||
|
|
|
@@ -1,100 +1,187 @@
|
|||
# MobileNetV2 Quantization Aware Training
|
||||
# Contents
|
||||
|
||||
MobileNetV2 is a significant improvement over MobileNetV1 and pushes the state of the art for mobile visual recognition including classification, object detection and semantic segmentation.
|
||||
- [MobileNetV2 Description](#mobilenetv2-description)
|
||||
- [Model Architecture](#model-architecture)
|
||||
- [Dataset](#dataset)
|
||||
- [Features](#features)
|
||||
- [Mixed Precision](#mixed-precision)
|
||||
- [Environment Requirements](#environment-requirements)
|
||||
- [Script Description](#script-description)
|
||||
- [Script and Sample Code](#script-and-sample-code)
|
||||
- [Training Process](#training-process)
|
||||
- [Evaluation Process](#evaluation-process)
|
||||
- [Evaluation](#evaluation)
|
||||
- [Model Description](#model-description)
|
||||
- [Performance](#performance)
|
||||
- [Training Performance](#evaluation-performance)
|
||||
- [Inference Performance](#evaluation-performance)
|
||||
- [Description of Random Situation](#description-of-random-situation)
|
||||
- [ModelZoo Homepage](#modelzoo-homepage)
|
||||
|
||||
MobileNetV2 builds upon the ideas from MobileNetV1, using depthwise separable convolution as efficient building blocks. However, V2 introduces two new features to the architecture: 1) linear bottlenecks between the layers, and 2) shortcut connections between the bottlenecks.
|
||||
# [MobileNetV2 Description](#contents)
|
||||
|
||||
This tutorial trains MobileNetV2 on the ImageNet dataset in MindSpore using quantization aware training.

It is a simple, basic example of constructing a quantization aware network in MindSpore.
|
||||
MobileNetV2 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm, and then subsequently improved through novel architecture advances.
|
||||
|
||||
In this readme tutorial, you will:
|
||||
[Paper](https://arxiv.org/pdf/1905.02244) Howard, Andrew, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang et al. "Searching for MobileNetV3." In Proceedings of the IEEE International Conference on Computer Vision, pp. 1314-1324. 2019.
|
||||
|
||||
1. Train a MindSpore fusion MobileNetV2 model for ImageNet from scratch using `nn.Conv2dBnAct` and `nn.DenseBnAct`.
|
||||
2. Fine-tune the fusion model by applying the quantization aware training auto network converter API `convert_quant_network`; after the network converges, export a quantization aware model checkpoint file.
|
||||
This is the quantized network of MobileNetV2.
|
||||
|
||||
[Paper](https://arxiv.org/pdf/1801.04381) Sandler, Mark, et al. "Mobilenetv2: Inverted residuals and linear bottlenecks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
|
||||
# [Model architecture](#contents)
|
||||
|
||||
# Dataset
|
||||
The overall network architecture of MobileNetV2 is shown below:
|
||||
|
||||
Dataset used: ImageNet
|
||||
[Link](https://arxiv.org/pdf/1905.02244)
|
||||
|
||||
- Dataset size: about 125G
|
||||
- Train: 120G, 1281167 images: 1000 directories
|
||||
- Test: 5G, 50000 images: images should be classified into 1000 directories firstly, just like train images
|
||||
# [Dataset](#contents)
|
||||
|
||||
Dataset used: [imagenet](http://www.image-net.org/)
|
||||
|
||||
- Dataset size: ~125G, 1.28 million colorful images in 1000 classes
- Train: 120G, 1.28 million images
|
||||
- Test: 5G, 50000 images
|
||||
- Data format: RGB images.
|
||||
- Note: Data will be processed in src/dataset.py
|
||||
|
||||
# Environment Requirements
|
||||
|
||||
- Hardware(Ascend)
|
||||
- Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
|
||||
# [Features](#contents)
|
||||
|
||||
## [Mixed Precision(Ascend)](#contents)
|
||||
|
||||
The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
|
||||
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
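For reference, here is a minimal sketch (not taken from this repository's scripts) of turning on mixed precision through the high-level `Model` API; `network` is assumed to be the MobileNetV2 quant model defined in src/mobilenetV2.py:

```python
import mindspore.nn as nn
from mindspore.train.model import Model

# `network` is assumed to be defined as in src/mobilenetV2.py
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
net_opt = nn.Momentum(network.trainable_params(), learning_rate=0.01, momentum=0.9)
# amp_level="O2" casts the network to float16 while keeping BatchNorm and the loss in float32
model = Model(network, net_loss, net_opt, metrics={"acc"}, amp_level="O2")
```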
|
||||
|
||||
# [Environment Requirements](#contents)
|
||||
|
||||
- Hardware: Ascend
|
||||
- Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
|
||||
- Framework
|
||||
- [MindSpore](http://10.90.67.50/mindspore/archive/20200506/OpenSource/me_vm_x86/)
|
||||
- For more information, please check the resources below:
|
||||
- For more information, please check the resources below
|
||||
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
|
||||
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
|
||||
|
||||
|
||||
# Script description
|
||||
# [Script description](#contents)
|
||||
|
||||
## Script and sample code
|
||||
## [Script and sample code](#contents)
|
||||
|
||||
```python
├── mobileNetv2_quant
    ├── Readme.md              # descriptions about MobileNetV2-Quant
    ├── scripts
    │   ├──run_train_quant.sh  # shell script for train on Ascend
    │   ├──run_infer_quant.sh  # shell script for evaluation on Ascend
    ├── src
    │   ├──config.py           # parameter configuration
    │   ├──dataset.py          # creating dataset
    │   ├──launch.py           # start python script
    │   ├──lr_generator.py     # learning rate config
    │   ├──mobilenetV2.py      # MobileNetV2 architecture
    │   ├──utils.py            # supply the monitor module
    ├── train.py               # training script
    ├── eval.py                # evaluation script
    ├── export.py              # export checkpoint files into air/onnx
```
|
||||
|
||||
### Fine-tune for quantization aware training
|
||||
## [Training process](#contents)
|
||||
|
||||
Fine-tune the fusion model by applying the quantization aware training auto network converter API `convert_quant_network`; after the network converges, export a quantization aware model checkpoint file (a sketch follows below).
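A sketch of this fine-tuning flow under assumptions (this is not the repository's `train.py`; the checkpoint name and step count are placeholders, and `network` is assumed to be the MobileNetV2 fusion model built from src/mobilenetV2.py):

```python
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from mindspore.train.quant import quant
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig

# `network` is assumed to be the MobileNetV2 fusion model from src/mobilenetV2.py
param_dict = load_checkpoint("mobilenet_199.ckpt")      # pre-trained fusion weights (placeholder name)
load_param_into_net(network, param_dict)
network = quant.convert_quant_network(network)          # insert fake-quant operators
config_ck = CheckpointConfig(save_checkpoint_steps=625, keep_checkpoint_max=10)
ckpt_cb = ModelCheckpoint(prefix="mobilenetv2_quant", config=config_ck)
# fine-tune with Model(...).train(..., callbacks=[ckpt_cb]) until convergence, then keep
# the resulting quantization aware checkpoint
```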
|
||||
### Usage
|
||||
|
||||
- sh run_train_quant.sh Ascend [DEVICE_NUM] [SERVER_IP(x.x.x.x)] [VISIABLE_DEVICES(0,1,2,3,4,5,6,7)] [DATASET_PATH] [CKPT_PATH]
|
||||
|
||||
You can just run this command instead.
|
||||
You can start training using python or shell scripts. The usage of the shell script is as follows:
|
||||
|
||||
- Ascend: sh run_train_quant.sh Ascend [DEVICE_NUM] [VISIABLE_DEVICES(0,1,2,3,4,5,6,7)] [RANK_TABLE_FILE] [DATASET_PATH] [CKPT_PATH]
|
||||
|
||||
### Launch
|
||||
|
||||
``` bash
|
||||
>>> sh run_train_quant.sh Ascend 4 192.168.0.1 0,1,2,3 ~/imagenet/train/ ~/mobilenet.ckpt
|
||||
```
|
||||
``` bash
# training example
|
||||
shell:
|
||||
Ascend: sh run_train_quant.sh Ascend 8 10.222.223.224 0,1,2,3,4,5,6,7 ~/imagenet/train/ mobilenet_199.ckpt
|
||||
```
|
||||
|
||||
### Result
|
||||
|
||||
Training results will be stored in the example path. Checkpoints will be stored at `./checkpoint` by default, and the training log will be redirected to `./train/train.log`, as follows.
|
||||
|
||||
```
|
||||
>>> epoch: [ 0/60], step:[ 624/ 625], loss:[5.258/5.258], time:[140412.236], lr:[0.100]
|
||||
>>> epoch time: 140522.500, per step time: 224.836, avg loss: 5.258
|
||||
>>> epoch: [ 1/60], step:[ 624/ 625], loss:[3.917/3.917], time:[138221.250], lr:[0.200]
|
||||
>>> epoch time: 138331.250, per step time: 221.330, avg loss: 3.917
|
||||
epoch: [ 0/200], step:[ 624/ 625], loss:[5.258/5.258], time:[140412.236], lr:[0.100]
|
||||
epoch time: 140522.500, per step time: 224.836, avg loss: 5.258
|
||||
epoch: [ 1/200], step:[ 624/ 625], loss:[3.917/3.917], time:[138221.250], lr:[0.200]
|
||||
epoch time: 138331.250, per step time: 221.330, avg loss: 3.917
|
||||
```
|
||||
|
||||
### Evaluate quantization aware training model
|
||||
## [Eval process](#contents)
|
||||
|
||||
Evaluate a MindSpore fusion MobileNetV2 model for ImageNet by applying the quantization aware training, like:
|
||||
### Usage
|
||||
|
||||
- sh run_infer_quant.sh Ascend [DATASET_PATH] [CHECKPOINT_PATH]
|
||||
You can start evaluation using python or shell scripts. The usage of the shell script is as follows:
|
||||
|
||||
You can just run this command instead.
|
||||
- Ascend: sh run_infer_quant.sh Ascend [DATASET_PATH] [CHECKPOINT_PATH]
|
||||
|
||||
``` bash
|
||||
>>> sh run_infer_quant.sh Ascend ~/imagenet/val/ ~/train/mobilenet-60_625.ckpt
|
||||
```
|
||||
|
||||
Inference results will be stored in the example path; you can find results like the following in `val.log`.
|
||||
### Launch
|
||||
|
||||
```
|
||||
>>> result: {'acc': 0.71976314102564111} ckpt=/path/to/checkpoint/mobilenet-60_625.ckpt
|
||||
# infer example
|
||||
shell:
|
||||
Ascend: sh run_infer_quant.sh Ascend ~/imagenet/val/ ~/train/mobilenet-60_1601.ckpt
|
||||
```
|
||||
|
||||
# ModelZoo Homepage
|
||||
[Link](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)
|
||||
> checkpoint can be produced in training process.
|
||||
|
||||
### Result
|
||||
|
||||
Inference results will be stored in the example path; you can find results like the following in `./val/infer.log`.
|
||||
|
||||
```
|
||||
result: {'acc': 0.71976314102564111}
|
||||
```
|
||||
|
||||
# [Model description](#contents)
|
||||
|
||||
## [Performance](#contents)
|
||||
|
||||
### Training Performance
|
||||
|
||||
| Parameters | MobilenetV2 |
|
||||
| -------------------------- | ---------------------------------------------------------- |
|
||||
| Model Version | V2 |
|
||||
| Resource | Ascend 910, cpu:2.60GHz 56cores, memory:314G |
|
||||
| uploaded Date | 06/06/2020 |
|
||||
| MindSpore Version | 0.3.0 |
|
||||
| Dataset | ImageNet |
|
||||
| Training Parameters | src/config.py |
|
||||
| Optimizer | Momentum |
|
||||
| Loss Function | SoftmaxCrossEntropy |
|
||||
| outputs | ckpt file |
|
||||
| Loss | 1.913 |
|
||||
| Accuracy | |
|
||||
| Total time | 16h |
|
||||
| Params (M) | batch_size=192, epoch=60 |
|
||||
| Checkpoint for Fine tuning | |
|
||||
| Model for inference | |
|
||||
|
||||
#### Evaluation Performance
|
||||
|
||||
| Parameters | |
|
||||
| -------------------------- | ----------------------------- |
|
||||
| Model Version | V2 |
|
||||
| Resource | Ascend 910 |
|
||||
| uploaded Date | 06/06/2020 |
|
||||
| MindSpore Version | 0.3.0 |
|
||||
| Dataset | ImageNet, 1.2W |
|
||||
| batch_size | 130(8P) |
|
||||
| outputs | probability |
|
||||
| Accuracy | ACC1[71.78%] ACC5[90.90%] |
|
||||
| Speed | 200ms/step |
|
||||
| Total time | 5min |
|
||||
| Model for inference | |
|
||||
|
||||
# [Description of Random Situation](#contents)
|
||||
|
||||
In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
|
||||
|
||||
# [ModelZoo Homepage](#contents)
|
||||
|
||||
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
|
||||
|
|
|
@@ -1,218 +0,0 @@
|
|||
# Copyright 2020 Huawei Technologies Co., Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
# ============================================================================
|
||||
"""MobileNetV2 Quant model define"""
|
||||
|
||||
import mindspore.nn as nn
|
||||
from mindspore.ops import operations as P
|
||||
|
||||
__all__ = ['mobilenetV2_quant']
|
||||
|
||||
_quant_delay = 200
|
||||
_ema_decay = 0.999
|
||||
_symmetric = False
|
||||
_per_channel = False
|
||||
|
||||
|
||||
def _make_divisible(v, divisor, min_value=None):
|
||||
if min_value is None:
|
||||
min_value = divisor
|
||||
new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
|
||||
# Make sure that round down does not go down by more than 10%.
|
||||
if new_v < 0.9 * v:
|
||||
new_v += divisor
|
||||
return new_v
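# Example (illustrative, not part of the original file): _make_divisible(30, 8) -> 32 and
# _make_divisible(24, 8) -> 24, i.e. channel counts are rounded to the nearest multiple of
# `divisor` while never dropping below 90% of the requested value.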
|
||||
|
||||
|
||||
class GlobalAvgPooling(nn.Cell):
|
||||
"""
|
||||
Global avg pooling definition.
|
||||
|
||||
Args:
|
||||
|
||||
Returns:
|
||||
Tensor, output tensor.
|
||||
|
||||
Examples:
|
||||
>>> GlobalAvgPooling()
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
super(GlobalAvgPooling, self).__init__()
|
||||
self.mean = P.ReduceMean(keep_dims=False)
|
||||
|
||||
def construct(self, x):
|
||||
x = self.mean(x, (2, 3))
|
||||
return x
|
||||
|
||||
|
||||
class ConvBNReLU(nn.Cell):
|
||||
"""
|
||||
Convolution/Depthwise fused with Batchnorm and ReLU block definition.
|
||||
|
||||
Args:
|
||||
in_planes (int): Input channel.
|
||||
out_planes (int): Output channel.
|
||||
kernel_size (int): Input kernel size.
|
||||
stride (int): Stride size for the first convolutional layer. Default: 1.
|
||||
groups (int): channel group. Convolution is 1 while Depthwise is input channel. Default: 1.
|
||||
|
||||
Returns:
|
||||
Tensor, output tensor.
|
||||
|
||||
Examples:
|
||||
>>> ConvBNReLU(16, 256, kernel_size=1, stride=1, groups=1)
|
||||
"""
|
||||
|
||||
def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):
|
||||
super(ConvBNReLU, self).__init__()
|
||||
padding = (kernel_size - 1) // 2
|
||||
conv = nn.Conv2dBnFoldQuant(in_planes, out_planes, kernel_size, stride,
|
||||
pad_mode='pad', padding=padding, quant_delay=_quant_delay, group=groups,
|
||||
per_channel=_per_channel, symmetric=_symmetric)
|
||||
layers = [conv, nn.ReLU()]
|
||||
self.features = nn.SequentialCell(layers)
|
||||
self.fake = nn.FakeQuantWithMinMax(ema=True, ema_decay=_ema_decay, min_init=0, quant_delay=_quant_delay)
|
||||
|
||||
def construct(self, x):
|
||||
output = self.features(x)
|
||||
output = self.fake(output)
|
||||
return output
|
||||
|
||||
|
||||
class InvertedResidual(nn.Cell):
|
||||
"""
|
||||
Mobilenetv2 residual block definition.
|
||||
|
||||
Args:
|
||||
inp (int): Input channel.
|
||||
oup (int): Output channel.
|
||||
stride (int): Stride size for the first convolutional layer. Default: 1.
|
||||
expand_ratio (int): expand ratio of input channel
|
||||
|
||||
Returns:
|
||||
Tensor, output tensor.
|
||||
|
||||
Examples:
|
||||
>>> ResidualBlock(3, 256, 1, 1)
|
||||
"""
|
||||
|
||||
def __init__(self, inp, oup, stride, expand_ratio):
|
||||
super(InvertedResidual, self).__init__()
|
||||
assert stride in [1, 2]
|
||||
|
||||
hidden_dim = int(round(inp * expand_ratio))
|
||||
self.use_res_connect = stride == 1 and inp == oup
|
||||
|
||||
layers = []
|
||||
if expand_ratio != 1:
|
||||
layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1))
|
||||
layers.extend([
|
||||
# dw
|
||||
ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim),
|
||||
# pw-linear
|
||||
nn.Conv2dBnFoldQuant(hidden_dim, oup, kernel_size=1, stride=1, pad_mode='pad', padding=0, group=1,
|
||||
per_channel=_per_channel, symmetric=_symmetric, quant_delay=_quant_delay),
|
||||
nn.FakeQuantWithMinMax(ema=True, ema_decay=_ema_decay, quant_delay=_quant_delay)
|
||||
])
|
||||
self.conv = nn.SequentialCell(layers)
|
||||
self.add = P.TensorAdd()
|
||||
self.add_fake = nn.FakeQuantWithMinMax(ema=True, ema_decay=_ema_decay, quant_delay=_quant_delay)
|
||||
|
||||
def construct(self, x):
|
||||
identity = x
|
||||
x = self.conv(x)
|
||||
if self.use_res_connect:
|
||||
x = self.add(identity, x)
|
||||
x = self.add_fake(x)
|
||||
return x
|
||||
|
||||
|
||||
class MobileNetV2Quant(nn.Cell):
|
||||
"""
|
||||
MobileNetV2Quant architecture.
|
||||
|
||||
Args:
|
||||
class_num (Cell): number of classes.
|
||||
width_mult (int): Channels multiplier for round to 8/16 and others. Default is 1.
|
||||
has_dropout (bool): Is dropout used. Default is false
|
||||
inverted_residual_setting (list): Inverted residual settings. Default is None
|
||||
round_nearest (int): Channel round-to divisor. Default is 8
|
||||
Returns:
|
||||
Tensor, output tensor.
|
||||
|
||||
Examples:
|
||||
>>> MobileNetV2Quant(num_classes=1000)
|
||||
"""
|
||||
|
||||
def __init__(self, num_classes=1000, width_mult=1.,
|
||||
has_dropout=False, inverted_residual_setting=None, round_nearest=8):
|
||||
super(MobileNetV2Quant, self).__init__()
|
||||
block = InvertedResidual
|
||||
input_channel = 32
|
||||
last_channel = 1280
|
||||
# setting of inverted residual blocks
|
||||
self.cfgs = inverted_residual_setting
|
||||
if inverted_residual_setting is None:
|
||||
self.cfgs = [
|
||||
# t, c, n, s
|
||||
[1, 16, 1, 1],
|
||||
[6, 24, 2, 2],
|
||||
[6, 32, 3, 2],
|
||||
[6, 64, 4, 2],
|
||||
[6, 96, 3, 1],
|
||||
[6, 160, 3, 2],
|
||||
[6, 320, 1, 1],
|
||||
]
|
||||
|
||||
# building first layer
|
||||
input_channel = _make_divisible(input_channel * width_mult, round_nearest)
|
||||
self.out_channels = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)
|
||||
self.input_fake = nn.FakeQuantWithMinMax(ema=True, ema_decay=_ema_decay, quant_delay=_quant_delay)
|
||||
features = [ConvBNReLU(3, input_channel, stride=2)]
|
||||
# building inverted residual blocks
|
||||
for t, c, n, s in self.cfgs:
|
||||
output_channel = _make_divisible(c * width_mult, round_nearest)
|
||||
for i in range(n):
|
||||
stride = s if i == 0 else 1
|
||||
features.append(block(input_channel, output_channel, stride, expand_ratio=t))
|
||||
input_channel = output_channel
|
||||
# building last several layers
|
||||
features.append(ConvBNReLU(input_channel, self.out_channels, kernel_size=1))
|
||||
# make it nn.CellList
|
||||
self.features = nn.SequentialCell(features)
|
||||
# mobilenet head
|
||||
head = ([GlobalAvgPooling(),
|
||||
nn.DenseQuant(self.out_channels, num_classes, has_bias=True, per_channel=_per_channel,
|
||||
symmetric=_symmetric, quant_delay=_quant_delay),
|
||||
nn.FakeQuantWithMinMax(ema=True, ema_decay=_ema_decay)] if not has_dropout else
|
||||
[GlobalAvgPooling(),
|
||||
nn.Dropout(0.2),
|
||||
nn.DenseQuant(self.out_channels, num_classes, has_bias=True, per_channel=_per_channel,
|
||||
symmetric=_symmetric, quant_delay=_quant_delay),
|
||||
nn.FakeQuantWithMinMax(ema=True, ema_decay=_ema_decay, quant_delay=_quant_delay)])
|
||||
self.head = nn.SequentialCell(head)
|
||||
|
||||
def construct(self, x):
|
||||
x = self.input_fake(x)
|
||||
x = self.features(x)
|
||||
x = self.head(x)
|
||||
return x
|
||||
|
||||
|
||||
def mobilenetV2_quant(**kwargs):
|
||||
"""
|
||||
Constructs a MobileNet V2 model
|
||||
"""
|
||||
return MobileNetV2Quant(**kwargs)
|
|
@@ -1,87 +1,103 @@
|
|||
# ResNet-50_quant Example
|
||||
# Contents
|
||||
|
||||
## Description
|
||||
- [resnet50 Description](#resnet50-description)
|
||||
- [Model Architecture](#model-architecture)
|
||||
- [Dataset](#dataset)
|
||||
- [Features](#features)
|
||||
- [Mixed Precision](#mixed-precision)
|
||||
- [Environment Requirements](#environment-requirements)
|
||||
- [Script Description](#script-description)
|
||||
- [Script and Sample Code](#script-and-sample-code)
|
||||
- [Training Process](#training-process)
|
||||
- [Evaluation Process](#evaluation-process)
|
||||
- [Evaluation](#evaluation)
|
||||
- [Model Description](#model-description)
|
||||
- [Performance](#performance)
|
||||
- [Training Performance](#evaluation-performance)
|
||||
- [Inference Performance](#evaluation-performance)
|
||||
- [Description of Random Situation](#description-of-random-situation)
|
||||
- [ModelZoo Homepage](#modelzoo-homepage)
|
||||
|
||||
This is an example of training ResNet-50_quant with ImageNet2012 dataset in MindSpore.
|
||||
# [resnet50 Description](#contents)
|
||||
|
||||
## Requirements
|
||||
ResNet-50 is a convolutional neural network that is 50 layers deep and can classify ImageNet images into 1000 object categories with 76% accuracy.
|
||||
|
||||
- Install [MindSpore](https://www.mindspore.cn/install/en).
|
||||
[Paper](https://arxiv.org/abs/1512.03385): He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Deep Residual Learning for Image Recognition." IEEE Conference on Computer Vision & Pattern Recognition, IEEE Computer Society, 2016.
|
||||
|
||||
- Download the dataset ImageNet2012
|
||||
This is the quantized network of Resnet50.
|
||||
|
||||
> Unzip the ImageNet2012 dataset to any path you want and the folder structure should include train and eval dataset as follows:
|
||||
> ```
|
||||
> .
|
||||
> ├── ilsvrc # train dataset
|
||||
> └── ilsvrc_eval # infer dataset: images should be classified into 1000 directories firstly, just like train images
|
||||
> ```
|
||||
# [Model architecture](#contents)
|
||||
|
||||
The overall network architecture of Resnet50 is shown below:
|
||||
|
||||
[Link](https://arxiv.org/pdf/1512.03385.pdf)
|
||||
|
||||
# [Dataset](#contents)
|
||||
|
||||
Dataset used: [imagenet](http://www.image-net.org/)
|
||||
|
||||
- Dataset size: ~125G, 1.28 million colorful images in 1000 classes
- Train: 120G, 1.28 million images
|
||||
- Test: 5G, 50000 images
|
||||
- Data format: RGB images.
|
||||
- Note: Data will be processed in src/dataset.py
|
||||
|
||||
|
||||
## Example structure
|
||||
# [Features](#contents)
|
||||
|
||||
```shell
|
||||
.
|
||||
├── Resnet50_quant
|
||||
├── Readme.md
|
||||
## [Mixed Precision(Ascend)](#contents)
|
||||
|
||||
The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
|
||||
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
|
||||
|
||||
# [Environment Requirements](#contents)
|
||||
|
||||
- Hardware: Ascend
|
||||
- Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
|
||||
- Framework
|
||||
- [MindSpore](http://10.90.67.50/mindspore/archive/20200506/OpenSource/me_vm_x86/)
|
||||
- For more information, please check the resources below:
|
||||
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
|
||||
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
|
||||
|
||||
|
||||
# [Script description](#contents)
|
||||
|
||||
## [Script and sample code](#contents)
|
||||
|
||||
```python
├── resnet50_quant
    ├── Readme.md              # descriptions about Resnet50-Quant
    ├── scripts
    │   ├──run_train.sh        # shell script for train on Ascend
    │   ├──run_infer.sh        # shell script for evaluation on Ascend
    ├── model
    │   ├──resnet_quant.py     # define the network model of resnet50-quant
    ├── src
    │   ├──config.py           # parameter configuration
    │   ├──crossentropy.py     # define the crossentropy of resnet50-quant
    │   ├──dataset.py          # creating dataset
    │   ├──launch.py           # start python script
    │   ├──lr_generator.py     # learning rate config
    │   ├──utils.py
    ├── train.py               # training script
    ├── eval.py                # evaluation script
```

## Parameter configuration

Parameters for both training and inference can be set in config.py.

```
|
||||
"class_num": 1001, # dataset class number
|
||||
"batch_size": 32, # batch size of input tensor
|
||||
"loss_scale": 1024, # loss scale
|
||||
"momentum": 0.9, # momentum optimizer
|
||||
"weight_decay": 1e-4, # weight decay
|
||||
"epoch_size": 120, # only valid for taining, which is always 1 for inference
|
||||
"pretrained_epoch_size": 90, # epoch size that model has been trained before load pretrained checkpoint
|
||||
"buffer_size": 1000, # number of queue size in data preprocessing
|
||||
"image_height": 224, # image height
|
||||
"image_width": 224, # image width
|
||||
"save_checkpoint": True, # whether save checkpoint or not
|
||||
"save_checkpoint_epochs": 1, # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last epoch
|
||||
"keep_checkpoint_max": 50, # only keep the last keep_checkpoint_max checkpoint
|
||||
"save_checkpoint_path": "./", # path to save checkpoint relative to the executed path
|
||||
"warmup_epochs": 0, # number of warmup epoch
|
||||
"lr_decay_mode": "cosine", # decay mode for generating learning rate
|
||||
"label_smooth": True, # label smooth
|
||||
"label_smooth_factor": 0.1, # label smooth factor
|
||||
"lr_init": 0, # initial learning rate
|
||||
"lr_max": 0.005, # maximum learning rate
|
||||
```
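A hypothetical illustration of how such a configuration is typically consumed (the `easydict` layout is an assumption; check the actual src/config.py):

```python
from easydict import EasyDict as ed

config = ed({
    "class_num": 1001,
    "batch_size": 32,
    "lr_max": 0.005,
})
print(config.batch_size)   # train.py / eval.py read these values as attributes
```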
|
||||
|
||||
## Running the example
|
||||
|
||||
### Train
|
||||
## [Training process](#contents)
|
||||
|
||||
### Usage
|
||||
|
||||
- Ascend: sh run_train.sh Ascend [DEVICE_NUM] [SERVER_IP(x.x.x.x)] [VISIABLE_DEVICES(0,1,2,3,4,5,6,7)] [DATASET_PATH] [CKPT_PATH]
|
||||
|
||||
You can start training using python or shell scripts. The usage of the shell script is as follows:

- Ascend: sh run_train.sh Ascend [DEVICE_NUM] [SERVER_IP(x.x.x.x)] [VISIABLE_DEVICES(0,1,2,3,4,5,6,7)] [DATASET_PATH] [CKPT_PATH]
|
||||
### Launch
|
||||
|
||||
```
|
||||
# training example
|
||||
Ascend: sh run_train.sh Ascend 8 192.168.0.1 0,1,2,3,4,5,6,7 ~/imagenet/train/
|
||||
shell:
|
||||
Ascend: sh run_train.sh Ascend 8 10.222.223.224 0,1,2,3,4,5,6,7 ~/resnet/train/ Resnet50-90_5004.ckpt
|
||||
```
|
||||
|
||||
### Result
|
||||
|
@@ -96,27 +112,76 @@ epoch: 4 step: 5004, loss is 3.2795618
|
|||
epoch: 5 step: 5004, loss is 3.1978393
|
||||
```
|
||||
|
||||
## Eval process
|
||||
## [Eval process](#contents)
|
||||
|
||||
### Usage
|
||||
|
||||
You can start evaluation using python or shell scripts. The usage of the shell script is as follows:
|
||||
|
||||
- Ascend: sh run_infer.sh Ascend [DATASET_PATH] [CHECKPOINT_PATH]
|
||||
|
||||
### Launch
|
||||
|
||||
```
|
||||
# infer example
|
||||
Ascend: sh run_infer.sh Ascend ~/imagenet/val/ ~/checkpoint/resnet50-110_5004.ckpt
|
||||
shell:
|
||||
Ascend: sh run_infer.sh Ascend ~/imagenet/val/ ~/train/Resnet50-30_5004.ckpt
|
||||
```
|
||||
|
||||
|
||||
> checkpoint can be produced in training process.
|
||||
|
||||
#### Result
|
||||
### Result
|
||||
|
||||
Inference results will be stored in the example path, in a folder named "infer"; there you can find results like the following in the log.

Inference results will be stored in the example path; you can find results like the following in `./eval/infer.log`.
|
||||
|
||||
```
|
||||
result: {'acc': 0.75252054737516005} ckpt=train_parallel0/resnet-110_5004.ckpt
|
||||
result: {'acc': 0.76576314102564111}
|
||||
```
|
||||
|
||||
# [Model description](#contents)
|
||||
|
||||
## [Performance](#contents)
|
||||
|
||||
### Training Performance
|
||||
|
||||
| Parameters | Resnet50 |
|
||||
| -------------------------- | ---------------------------------------------------------- |
|
||||
| Model Version | V1 |
|
||||
| Resource | Ascend 910, cpu:2.60GHz 56cores, memory:314G |
|
||||
| uploaded Date | 06/06/2020 |
|
||||
| MindSpore Version | 0.3.0 |
|
||||
| Dataset | ImageNet |
|
||||
| Training Parameters | src/config.py |
|
||||
| Optimizer | Momentum |
|
||||
| Loss Function | SoftmaxCrossEntropy |
|
||||
| outputs | ckpt file |
|
||||
| Loss | 1.8 |
|
||||
| Accuracy | |
|
||||
| Total time | 16h |
|
||||
| Params (M) | batch_size=32, epoch=30 |
|
||||
| Checkpoint for Fine tuning | |
|
||||
| Model for inference | |
|
||||
|
||||
#### Evaluation Performance
|
||||
|
||||
| Parameters | Resnet50 |
|
||||
| -------------------------- | ----------------------------- |
|
||||
| Model Version | V1 |
|
||||
| Resource | Ascend 910 |
|
||||
| uploaded Date | 06/06/2020 |
|
||||
| MindSpore Version | 0.3.0 |
|
||||
| Dataset | ImageNet, 1.2W |
|
||||
| batch_size | 130(8P) |
|
||||
| outputs | probability |
|
||||
| Accuracy | ACC1[76.57%] ACC5[92.90%] |
|
||||
| Speed | 5ms/step |
|
||||
| Total time | 5min |
|
||||
| Model for inference | |
|
||||
|
||||
# [Description of Random Situation](#contents)
|
||||
|
||||
In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
|
||||
|
||||
# [ModelZoo Homepage](#contents)
|
||||
|
||||
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
|
||||
|
|