forked from mindspore-Ecosystem/mindspore
fix googlenet & shufflenetv2 bugs
parent c37aa71009
commit ca562f53b0
@@ -98,7 +98,7 @@ if __name__ == '__main__':
     elif args_opt.dataset_name == "imagenet":
         cfg = imagenet_cfg
     else:
-        raise ValueError("Unsupport dataset.")
+        raise ValueError("Unsupported dataset.")

     # set context
     device_target = cfg.device_target
@@ -120,9 +120,8 @@ if __name__ == '__main__':
             init()
             rank = get_rank()
     elif device_target == "GPU":
-        init()
-
         if device_num > 1:
+            init()
             context.reset_auto_parallel_context()
             context.set_auto_parallel_context(device_num=device_num, parallel_mode=ParallelMode.DATA_PARALLEL,
                                               gradients_mean=True)
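For readability, here is a minimal sketch of the pattern this hunk converges on: on GPU the communication backend is initialized only when more than one device is requested, presumably so that single-device runs launched without mpirun never call `init()`. Everything around the function (argument parsing, earlier context setup) is assumed and not shown in the diff.

```python
# Minimal sketch, not the upstream train.py verbatim: GPU data-parallel setup
# guarded by the device count, as implied by the hunk above.
from mindspore import context
from mindspore.context import ParallelMode
from mindspore.communication.management import init, get_rank


def setup_gpu_parallel(device_num):
    """Return this process's rank; 0 when running on a single device."""
    rank = 0
    if device_num > 1:
        init()  # communication init; expects the job to be launched via mpirun
        context.reset_auto_parallel_context()
        context.set_auto_parallel_context(device_num=device_num,
                                          parallel_mode=ParallelMode.DATA_PARALLEL,
                                          gradients_mean=True)
        rank = get_rank()
    return rank
```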
@@ -135,7 +134,7 @@ if __name__ == '__main__':
     elif args_opt.dataset_name == "imagenet":
         dataset = create_dataset_imagenet(cfg.data_path, 1)
     else:
-        raise ValueError("Unsupport dataset.")
+        raise ValueError("Unsupported dataset.")

     batch_num = dataset.get_dataset_size()
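The first and last train.py hunks only correct the spelling of the error message, but together they show how the script maps `--dataset_name` to a config and a dataset. A minimal sketch of that dispatch is below, assuming `imagenet_cfg` and `create_dataset_imagenet` live in the project's `src/config.py` and `src/dataset.py` (the module paths are an assumption; only the names appear in the diff).

```python
# Minimal sketch of the dataset dispatch implied by the hunks above.
# src.config / src.dataset are assumed module locations.
from src.config import imagenet_cfg
from src.dataset import create_dataset_imagenet


def select_config_and_dataset(dataset_name):
    """Map the --dataset_name argument to its config and training dataset."""
    if dataset_name == "imagenet":
        cfg = imagenet_cfg
        dataset = create_dataset_imagenet(cfg.data_path, 1)
    else:
        raise ValueError("Unsupported dataset.")
    return cfg, dataset


cfg, dataset = select_config_and_dataset("imagenet")
batch_num = dataset.get_dataset_size()  # as in the hunk above
```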
@@ -18,7 +18,7 @@

 # [ShuffleNetV2 Description](#contents)

-ShuffleNetV2 is a much faster and more accurate netowrk than the previous networks on different platforms such as Ascend or GPU.
+ShuffleNetV2 is a much faster and more accurate network than the previous networks on different platforms such as Ascend or GPU.
 [Paper](https://arxiv.org/pdf/1807.11164.pdf) Ma, N., Zhang, X., Zheng, H. T., & Sun, J. (2018). Shufflenet v2: Practical guidelines for efficient cnn architecture design. In Proceedings of the European conference on computer vision (ECCV) (pp. 116-131).

 # [Model architecture](#contents)
@@ -32,28 +32,27 @@ The overall network architecture of ShuffleNetV2 is show below:
 Dataset used: [imagenet](http://www.image-net.org/)

 - Dataset size: ~125G, 1.2W colorful images in 1000 classes
     - Train: 120G, 1.2W images
     - Test: 5G, 50000 images
 - Data format: RGB images.
     - Note: Data will be processed in src/dataset.py

 # [Environment Requirements](#contents)

 - Hardware(GPU)
     - Prepare hardware environment with GPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
     - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
     - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

 # [Script description](#contents)

 ## [Script and sample code](#contents)

 ```python
 +-- ShuffleNetV2
   +-- Readme.md # descriptions about ShuffleNetV2
   +-- scripts
     +--run_distribute_train_for_gpu.sh # shell script for distributed training
@@ -74,15 +73,14 @@ Dataset used: [imagenet](http://www.image-net.org/)

 ### Usage

 You can start training using python or shell scripts. The usage of shell scripts as follows:

-- Ditributed training on GPU: sh run_standalone_train_for_gpu.sh [DEVICE_NUM] [VISIABLE_DEVICES(0,1,2,3,4,5,6,7)] [DATASET_PATH]
+- Distributed training on GPU: sh run_standalone_train_for_gpu.sh [DEVICE_NUM] [VISIABLE_DEVICES(0,1,2,3,4,5,6,7)] [DATASET_PATH]
 - Standalone training on GPU: sh run_standalone_train_for_gpu.sh [DATASET_PATH]

 ### Launch

-```
+```bash
 # training example
 python:
 GPU: mpirun --allow-run-as-root -n 8 --output-filename log_output --merge-stderr-to-stdout python train.py --is_distributed=True --platform='GPU' --dataset_path='~/imagenet/train/' > train.log 2>&1 &
@@ -105,13 +103,13 @@ You can start evaluation using python or shell scripts. The usage of shell scrip

 ### Launch

-```
+```bash
 # infer example
 python:
 GPU: CUDA_VISIBLE_DEVICES=0 python eval.py --platform='GPU' --dataset_path='~/imagenet/val/' > eval.log 2>&1 &

 shell:
 GPU: cd scripts & sh run_eval_for_gpu.sh '~/imagenet/val/' 'checkpoint_file'
 ```

 > checkpoint can be produced in training process.
@@ -150,7 +148,6 @@ Inference result will be stored in the example path, you can find result in `eva
 | outputs | probability |
 | Accuracy | acc=69.4%(TOP1) |

 # [ModelZoo Homepage](#contents)

 Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).