Contents
- GhostNet Description
- Quantization Description
- Model Architecture
- Dataset
- Environment Requirements
- Script Description
- Model Description
- Description of Random Situation
- ModelZoo Homepage
GhostNet Description
The GhostNet architecture is built from Ghost modules, which generate more feature maps from cheap operations. Starting from a set of intrinsic feature maps, a series of cheap operations is applied to produce many ghost feature maps that fully reveal the information underlying the intrinsic features.
Paper: Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu. GhostNet: More Features from Cheap Operations. CVPR 2020.
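For orientation, the following is a minimal MindSpore sketch of a Ghost module (the class name, ratio, and kernel size are assumptions following the paper's defaults; the repository's actual implementation lives in src/ghostnet.py and may differ):

```python
import mindspore.nn as nn
import mindspore.ops as ops

class GhostModule(nn.Cell):
    """Illustrative Ghost module: a primary 1x1 convolution produces a few
    intrinsic feature maps, a cheap depthwise convolution generates the
    remaining "ghost" maps, and the two sets are concatenated along channels."""
    def __init__(self, in_channels, out_channels, ratio=2, dw_size=3):
        super().__init__()
        init_channels = out_channels // ratio          # intrinsic maps
        new_channels = out_channels - init_channels    # ghost maps
        self.primary_conv = nn.SequentialCell([
            nn.Conv2d(in_channels, init_channels, kernel_size=1, has_bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(),
        ])
        self.cheap_operation = nn.SequentialCell([
            nn.Conv2d(init_channels, new_channels, kernel_size=dw_size,
                      pad_mode='same', group=init_channels, has_bias=False),
            nn.BatchNorm2d(new_channels),
            nn.ReLU(),
        ])
        self.concat = ops.Concat(axis=1)

    def construct(self, x):
        y1 = self.primary_conv(x)      # intrinsic feature maps
        y2 = self.cheap_operation(y1)  # ghost feature maps from a cheap op
        return self.concat((y1, y2))
```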
Quantization Description
Quantization refers to techniques for performing computations and storing tensors at lower bit widths than floating-point precision. For 8-bit quantization, we quantize the weights into [-128, 127] and the activations into [0, 255]. We finetune the model for a few epochs after post-training quantization to achieve better performance.
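As a rough illustration of this scheme only (not the code in src/quant.py), symmetric weight quantization and asymmetric activation quantization can be sketched with NumPy as follows:

```python
import numpy as np

def quantize_weights(w):
    """Symmetric per-tensor 8-bit weight quantization into [-128, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale  # dequantize with q * scale

def quantize_activations(x):
    """Asymmetric 8-bit activation quantization into [0, 255]."""
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / 255.0
    zero_point = int(round(-x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point  # dequantize with (q - zero_point) * scale
```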
Model Architecture
The overall network architecture of GhostNet is shown below:
Dataset
Dataset used: Oxford-IIIT Pet
- Dataset size: 7049 color images in 37 classes
- Train: 3680 images
- Test: 3369 images
- Data format: RGB images
- Note: Data will be processed in src/dataset.py; a minimal loading sketch follows this list.
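The sketch below shows roughly how such a MindRecord split can be loaded and preprocessed with MindSpore (import paths, parameter names, and column names are assumptions and vary across MindSpore versions; the authoritative pipeline is in src/dataset.py):

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C
import mindspore.dataset.transforms.c_transforms as C2
import mindspore.common.dtype as mstype

def create_pets_dataset(mindrecord_path, batch_size=32):
    """Hypothetical loader for the Pets MindRecord file (column names assumed)."""
    data_set = ds.MindDataset(mindrecord_path, columns_list=["image", "label"])
    image_ops = [
        C.Decode(),
        C.Resize((224, 224)),
        C.Normalize(mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
                    std=[0.229 * 255, 0.224 * 255, 0.225 * 255]),
        C.HWC2CHW(),
    ]
    data_set = data_set.map(operations=image_ops, input_columns="image")
    data_set = data_set.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
    return data_set.batch(batch_size, drop_remainder=False)
```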
Environment Requirements
- Hardware (Ascend/GPU)
- Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the application form to ascend@huawei.com. Once approved, you can get the resources.
- Framework
- For more information, please check the resources below:
Script Description
Script and sample code
├── GhostNet
├── Readme.md # descriptions about GhostNet
├── src
│ ├──config.py # parameter configuration
│ ├──dataset.py # creating dataset
│ ├──launch.py # start python script
│ ├──lr_generator.py # learning rate config
│ ├──ghostnet.py # GhostNet architecture
│ ├──quant.py # GhostNet quantization
├── eval.py # evaluation script
├── mindspore_hub_conf.py # export model for hub
Training process
To Be Done
Eval process
Usage
After installing MindSpore via the official website, you can start evaluation as follows:
Launch
# infer example
Ascend: python eval.py --dataset_path ~/Pets/test.mindrecord --platform Ascend --checkpoint_path [CHECKPOINT_PATH]
GPU: python eval.py --dataset_path ~/Pets/test.mindrecord --platform GPU --checkpoint_path [CHECKPOINT_PATH]
The checkpoint can be produced during the training process.
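For orientation, the evaluation flow is roughly the following (names such as ghostnet_1x, the create_dataset signature, and num_classes=37 are assumptions; see eval.py, src/ghostnet.py, and src/dataset.py for the actual code):

```python
import mindspore.nn as nn
from mindspore import context
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

from src.dataset import create_dataset      # defined in src/dataset.py
from src.ghostnet import ghostnet_1x        # assumed constructor name

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

net = ghostnet_1x(num_classes=37)           # assumed class count for Pets
load_param_into_net(net, load_checkpoint("./ghostnet_1x_pets_int8.ckpt"))
net.set_train(False)

loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
model = Model(net, loss_fn=loss, metrics={'acc'})
dataset = create_dataset("~/Pets/test.mindrecord")  # signature assumed
print(model.eval(dataset))                  # e.g. {'acc': 0.825}
```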
Result
result: {'acc': 0.825} ckpt= ./ghostnet_1x_pets_int8.ckpt
Model Description
Performance
Evaluation Performance
GhostNet on ImageNet2012
| Parameters | GhostNet | GhostNet-int8 |
| --- | --- | --- |
| Uploaded Date | 09/08/2020 (month/day/year) | 09/08/2020 (month/day/year) |
| MindSpore Version | 0.6.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| Parameters (M) | 5.2 | / |
| FLOPs (M) | 142 | / |
| Accuracy (Top-1) | 73.9 | w/o finetune: 72.2, w/ finetune: 73.6 |
GhostNet on Oxford-IIIT Pet
| Parameters | GhostNet | GhostNet-int8 |
| --- | --- | --- |
| Uploaded Date | 09/08/2020 (month/day/year) | 09/08/2020 (month/day/year) |
| MindSpore Version | 0.6.0-alpha | 0.6.0-alpha |
| Dataset | Oxford-IIIT Pet | Oxford-IIIT Pet |
| Parameters (M) | 3.9 | / |
| FLOPs (M) | 140 | / |
| Accuracy (Top-1) | 82.4 | w/o finetune: 81.66, w/ finetune: 82.45 |
Description of Random Situation
In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py; a minimal sketch follows.
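A minimal sketch of how these seeds might be fixed (the exact call sites and values are in src/dataset.py and train.py, and API availability varies by MindSpore version):

```python
import mindspore.dataset as ds
from mindspore.common import set_seed

ds.config.set_seed(1)   # fixes shuffling order inside create_dataset
set_seed(1)             # fixes the global MindSpore seed used during training
```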
ModelZoo Homepage
Please check the official homepage.