Contents
- GhostNet Description
- Model Architecture
- Dataset
- Environment Requirements
- Script Description
- Model Description
- Description of Random Situation
- ModelZoo Homepage
GhostNet Description
The GhostNet architecture is built from Ghost modules, which generate more feature maps from cheap operations. Starting from a small set of intrinsic feature maps, a series of cheap linear operations is applied to generate many "ghost" feature maps that fully reveal the information underlying the intrinsic features.
Paper: Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu. GhostNet: More Features from Cheap Operations. CVPR 2020.
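To make the idea above concrete, here is a minimal NumPy sketch of a Ghost module (this is an illustrative stand-in, not the MindSpore implementation in src/ghostnet.py, which uses a primary convolution followed by depthwise convolutions). The `ghost_module` function, the fixed averaging kernel, and the `ratio` parameter are assumptions chosen for illustration:

```python
import numpy as np

def ghost_module(x, ratio=2, cheap_kernel=3):
    """Sketch of the Ghost idea: expand intrinsic feature maps with cheap
    per-channel (depthwise-style) operations and concatenate the results.
    x: intrinsic feature maps of shape (channels, H, W)."""
    intrinsic = x  # stands in for the output of the primary convolution
    pad = cheap_kernel // 2
    # a hypothetical fixed averaging filter plays the role of one cheap operation
    kernel = np.full((cheap_kernel, cheap_kernel), 1.0 / cheap_kernel ** 2)
    ghosts = []
    for _ in range(ratio - 1):
        padded = np.pad(intrinsic, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
        out = np.empty_like(intrinsic)
        for c in range(intrinsic.shape[0]):      # depthwise: one filter per channel
            for i in range(intrinsic.shape[1]):
                for j in range(intrinsic.shape[2]):
                    out[c, i, j] = np.sum(
                        padded[c, i:i + cheap_kernel, j:j + cheap_kernel] * kernel)
        ghosts.append(out)
    # the identity-mapped intrinsic maps are kept alongside their ghosts
    return np.concatenate([intrinsic] + ghosts, axis=0)
```

With `ratio=2`, half of the output channels are the intrinsic maps themselves and half are cheaply derived ghosts, which is why the module needs far fewer FLOPs than producing all channels with ordinary convolutions.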
Model architecture
The overall network architecture of GhostNet is shown below:
Dataset
Dataset used: Oxford-IIIT Pet
- Dataset size: 7049 color images in 37 classes
- Train: 3680 images
- Test: 3369 images
- Data format: RGB images.
- Note: Data will be processed in src/dataset.py
Environment Requirements
- Hardware (Ascend/GPU)
- Prepare the hardware environment with Ascend or GPU. To try Ascend, send the application form to ascend@huawei.com; once approved, you will get access to the resources.
- Framework
- For more information, please check the resources below:
Script description
Script and sample code
```
├── GhostNet
  ├── Readme.md                 # descriptions about GhostNet
  ├── src
  │   ├── config.py             # parameter configuration
  │   ├── dataset.py            # creating dataset
  │   ├── launch.py             # start python script
  │   ├── lr_generator.py       # learning rate config
  │   ├── ghostnet.py           # GhostNet architecture
  │   ├── ghostnet600.py        # GhostNet-600M architecture
  ├── eval.py                   # evaluation script
  ├── mindspore_hub_conf.py     # export model for hub
```
Training process
To Be Done
Eval process
Usage
After installing MindSpore via the official website, you can start evaluation as follows:
Launch
```shell
# infer example
# Ascend
python eval.py --model [ghostnet/ghostnet-600] --dataset_path ~/Pets/test.mindrecord --platform Ascend --checkpoint_path [CHECKPOINT_PATH]
# GPU
python eval.py --model [ghostnet/ghostnet-600] --dataset_path ~/Pets/test.mindrecord --platform GPU --checkpoint_path [CHECKPOINT_PATH]
```
The checkpoint can be produced during the training process.
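The flags used in the commands above can be summarized as a command-line interface. The following is a hypothetical argparse sketch of that interface (the real flag handling lives in eval.py and may differ in defaults and help text):

```python
import argparse

def build_parser():
    """Sketch of eval.py's command-line interface, inferred from the
    usage examples in this README (an assumption, not the actual code)."""
    parser = argparse.ArgumentParser(description="GhostNet evaluation")
    parser.add_argument("--model", choices=["ghostnet", "ghostnet-600"],
                        required=True, help="which GhostNet variant to evaluate")
    parser.add_argument("--dataset_path", required=True,
                        help="path to test.mindrecord")
    parser.add_argument("--platform", choices=["Ascend", "GPU"],
                        default="Ascend", help="target device")
    parser.add_argument("--checkpoint_path", required=True,
                        help="path to a trained .ckpt file")
    return parser
```

For example, `build_parser().parse_args(["--model", "ghostnet", "--dataset_path", "test.mindrecord", "--platform", "GPU", "--checkpoint_path", "g.ckpt"])` yields a namespace with `platform == "GPU"`.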
Result
```
result: {'acc': 0.8113927500681385} ckpt= ./ghostnet_nose_1x_pets.ckpt
result: {'acc': 0.824475333878441} ckpt= ./ghostnet_1x_pets.ckpt
result: {'acc': 0.8691741618969746} ckpt= ./ghostnet600M_pets.ckpt
```
Model Description
Performance
Evaluation Performance
GhostNet on ImageNet2012
| Parameters | | |
| --- | --- | --- |
| Model Version | GhostNet | GhostNet-600 |
| Uploaded Date | 09/08/2020 (month/day/year) | 09/08/2020 (month/day/year) |
| MindSpore Version | 0.6.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| Parameters (M) | 5.2 | 11.9 |
| FLOPs (M) | 142 | 591 |
| Accuracy (Top1) | 73.9 | 80.2 |
GhostNet on Oxford-IIIT Pet
| Parameters | | |
| --- | --- | --- |
| Model Version | GhostNet | GhostNet-600 |
| Uploaded Date | 09/08/2020 (month/day/year) | 09/08/2020 (month/day/year) |
| MindSpore Version | 0.6.0-alpha | 0.6.0-alpha |
| Dataset | Oxford-IIIT Pet | Oxford-IIIT Pet |
| Parameters (M) | 3.9 | 10.6 |
| FLOPs (M) | 140 | 590 |
| Accuracy (Top1) | 82.4 | 86.9 |
Comparison with other methods on Oxford-IIIT Pet
| Model | FLOPs (M) | Latency (ms)* | Accuracy (Top1) |
| --- | --- | --- | --- |
| MobileNetV2-1x | 300 | 28.2 | 78.5 |
| Ghost-1x w/o SE | 138 | 19.1 | 81.1 |
| Ghost-1x | 140 | 25.3 | 82.4 |
| Ghost-600 | 590 | - | 86.9 |
*The latency is measured on a Huawei Kirin 990 chip in single-threaded mode with batch size 1.
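The efficiency gap in the table above can be quantified directly from its FLOPs and latency columns. The snippet below just restates that arithmetic; the dictionaries mirror the table values:

```python
# Values copied from the comparison table above (FLOPs in millions,
# latency in milliseconds on Kirin 990, single thread, batch size 1).
flops = {"MobileNetV2-1x": 300, "Ghost-1x w/o SE": 138, "Ghost-1x": 140}
latency_ms = {"MobileNetV2-1x": 28.2, "Ghost-1x w/o SE": 19.1, "Ghost-1x": 25.3}

# Ghost-1x w/o SE versus MobileNetV2-1x:
flops_ratio = flops["MobileNetV2-1x"] / flops["Ghost-1x w/o SE"]      # ~2.17x fewer FLOPs
speedup = latency_ms["MobileNetV2-1x"] / latency_ms["Ghost-1x w/o SE"]  # ~1.48x lower latency
```

Note that the measured speedup (~1.48x) is smaller than the FLOPs reduction (~2.17x), a common effect since on-device latency also depends on memory access and operator overheads, not FLOPs alone.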
Description of Random Situation
In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
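The seeding pattern described above can be illustrated with a small sketch. NumPy's generator is used here as a stand-in for the framework-level seeding done in src/dataset.py and train.py, so the function name and API are assumptions, not this repository's code:

```python
import numpy as np

def make_rng(seed=1):
    """Return a generator fixed by `seed`; calling this with the same seed
    reproduces the same random sequence, which is the property the
    "create_dataset" seeding relies on for reproducible runs."""
    return np.random.default_rng(seed)

a = make_rng(1).random(3)
b = make_rng(1).random(3)  # same seed -> identical sequence
```

Re-running an experiment with the same seed therefore shuffles and augments the data identically, making evaluation results repeatable.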
ModelZoo Homepage
Please check the official homepage.