
Contents

Adversarial Pruning Description

The Adversarial Pruning method is a reliable neural network pruning algorithm built around a scientific control. We prefer a more rigorous research design that includes a scientific control group as an essential part, to minimize the effect of all factors except the association between a filter and the expected network output. Acting as the control group, knockoff features are generated to mimic the feature maps produced by the network filters, yet they are conditionally independent of the example label given the real feature maps. Besides the real feature map of an intermediate layer, the corresponding knockoff feature is fed in as an auxiliary input signal for the subsequent layers.

Paper: Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing Xu, Chao Xu, Chang Xu. Scientific Control for Reliable Neural Network Pruning. Submitted to NeurIPS 2020.
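
The control-group idea can be sketched in a few lines of Python. The snippet below only illustrates comparing a real feature map against its knockoff counterpart to rank channels; the scoring rule, the 0.65 keep ratio, and all function names are assumptions made for exposition, not the criterion or code used in this repository.

```python
import numpy as np

def control_group_scores(real_feat, knockoff_feat, next_weight):
    """Score the channels of one layer by comparing how strongly the next layer
    responds to the real feature map versus its knockoff control.

    real_feat, knockoff_feat: arrays of shape (N, C, H, W).
    next_weight: weight of the subsequent (1x1) layer, shape (C_out, C).
    The scoring rule here is illustrative, not the paper's exact criterion.
    """
    real_act = np.abs(real_feat).mean(axis=(0, 2, 3))       # (C,) mean |activation| of real features
    knock_act = np.abs(knockoff_feat).mean(axis=(0, 2, 3))  # (C,) same statistic for the knockoffs
    usage = np.abs(next_weight).sum(axis=0)                 # (C,) how much the next layer uses each channel
    # A channel only matters if the real feature drives the next layer
    # more than its knockoff control does.
    return usage * (real_act - knock_act)

def channels_to_keep(scores, keep_ratio=0.65):
    """Indices of the highest-scoring channels (0.65 mirrors the '0.65x' model name)."""
    k = max(1, int(round(len(scores) * keep_ratio)))
    return np.sort(np.argsort(scores)[::-1][:k])

# Toy example: 8 channels whose knockoffs mimic the real statistics.
rng = np.random.default_rng(0)
real = rng.normal(size=(4, 8, 7, 7))
knockoff = real + rng.normal(scale=0.5, size=real.shape)
weight = rng.normal(size=(16, 8))
print(channels_to_keep(control_group_scores(real, knockoff, weight)))
```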

Dataset

Dataset used: Oxford-IIIT Pet

  • Dataset size: 7349 color images in 37 classes
    • Train: 3680 images
    • Test: 3669 images
  • Data format: RGB images
    • Note: Data will be processed in src/dataset.py (a loading sketch follows below)
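
For reference, here is a minimal sketch of how the test.mindrecord file could be loaded with MindSpore's dataset API. The column names, image transforms, and MindSpore 1.x-style module paths are assumptions; the authoritative pipeline is the one in src/dataset.py.

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as CV
import mindspore.dataset.transforms.c_transforms as C
from mindspore import dtype as mstype

def create_pet_eval_dataset(mindrecord_file, batch_size=32):
    """Assumed loader for ~/Pets/test.mindrecord; column names and transforms
    are guesses, see src/dataset.py for the real pipeline."""
    data = ds.MindDataset(mindrecord_file, columns_list=["image", "label"], shuffle=False)
    transforms = [
        CV.Decode(),
        CV.Resize((224, 224)),
        CV.Normalize(mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
                     std=[0.229 * 255, 0.224 * 255, 0.225 * 255]),
        CV.HWC2CHW(),
    ]
    data = data.map(operations=transforms, input_columns="image")
    data = data.map(operations=C.TypeCast(mstype.int32), input_columns="label")
    return data.batch(batch_size, drop_remainder=False)
```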

Environment Requirements

Script description

Script and sample code

├── Adversarial Pruning
  ├── Readme.md     # descriptions about adversarial-pruning
  ├── src
     ├──config.py      # parameter configuration
     ├──dataset.py     # creating dataset
     ├──resnet_imgnet.py      # Pruned ResNet architecture
  ├── eval.py       # evaluation script
  ├── index.txt       # channel index of each layer after pruning
  ├── mindspore_hub_conf.py       # export model for hub
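
index.txt stores the channel indices kept in each layer after pruning. The following sketch only illustrates how such per-layer index lists could be parsed and applied to slice a convolution weight; the assumed file format (one whitespace-separated index list per line) and the helper names are hypothetical, not the repository's actual parsing logic.

```python
import numpy as np

def load_channel_index(path="index.txt"):
    """Hypothetical parser: one whitespace-separated list of kept channel indices per line."""
    with open(path) as f:
        return [np.array(line.split(), dtype=np.int64) for line in f if line.strip()]

def prune_conv_weight(weight, kept_out, kept_in):
    """Slice a conv weight of shape (C_out, C_in, kH, kW) down to the kept channels."""
    return weight[np.ix_(kept_out, kept_in)]

# Toy example: keep 2 of 4 output channels and 3 of 4 input channels.
w = np.random.randn(4, 4, 3, 3)
print(prune_conv_weight(w, np.array([0, 2]), np.array([0, 1, 3])).shape)  # (2, 3, 3, 3)
```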

Training process

To Be Done

Eval process

Usage

After installing MindSpore via the official website, you can start evaluation as follows:

Launch

# infer example
  
  Ascend: python eval.py --dataset_path ~/Pets/test.mindrecord --platform Ascend --checkpoint_path [CHECKPOINT_PATH]
  GPU: python eval.py --dataset_path ~/Pets/test.mindrecord --platform GPU --checkpoint_path [CHECKPOINT_PATH]

Checkpoints can be produced during the training process.
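
For orientation, a minimal evaluation sketch in the spirit of eval.py is given below, assuming MindSpore 1.x-style APIs. The resnet50 constructor in src/resnet_imgnet.py and the create_dataset helper in src/dataset.py are assumed names; check eval.py for the actual entry points and arguments.

```python
import os
import mindspore.nn as nn
from mindspore import context, Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

# Assumed entry points; the real names and arguments are defined in eval.py and src/.
from src.resnet_imgnet import resnet50
from src.dataset import create_dataset

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")  # or "GPU"

net = resnet50(class_num=37)  # Oxford-IIIT Pet has 37 classes
load_param_into_net(net, load_checkpoint("./resnet50-imgnet-0.65x-80.24.ckpt"))

loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
model = Model(net, loss_fn=loss, metrics={"acc"})
print(model.eval(create_dataset(os.path.expanduser("~/Pets/test.mindrecord"))))
```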

Result

result: {'acc': 0.8023984736985554} ckpt= ./resnet50-imgnet-0.65x-80.24.ckpt

Model Description

Performance

Evaluation Performance

ResNet50-0.65x on ImageNet2012

  • Model Version: ResNet50-0.65x
  • Uploaded Date: 09/10/2020 (month/day/year)
  • MindSpore Version: 0.6.0-alpha
  • Dataset: ImageNet2012
  • Parameters (M): 14.6
  • FLOPs (G): 2.1
  • Accuracy (Top1): 75.80

ResNet50-0.65x on Oxford-IIIT Pet

  • Model Version: ResNet50-0.65x
  • Uploaded Date: 09/10/2020 (month/day/year)
  • MindSpore Version: 0.6.0-alpha
  • Dataset: Oxford-IIIT Pet
  • Parameters (M): 14.6
  • FLOPs (G): 2.1
  • Accuracy (Top1): 80.24

Description of Random Situation

In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
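
For reference, fixing seeds in MindSpore usually looks like the sketch below; the concrete seed values used by this repository are the ones set in dataset.py and train.py.

```python
import numpy as np
import mindspore
import mindspore.dataset as ds

seed = 1  # illustrative value; the actual seed is set in dataset.py / train.py
mindspore.set_seed(seed)  # weight initialization and MindSpore random ops
ds.config.set_seed(seed)  # dataset shuffling and random augmentations
np.random.seed(seed)
```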

ModelZoo Homepage

Please check the official homepage.