- [Script and Sample Code](#script-and-sample-code)
- [Script Parameters](#script-parameters)
- [Training Process](#training-process)
    - [Training](#training)
    - [Distributed Training](#distributed-training)
- [Evaluation Process](#evaluation-process)
    - [Evaluation](#evaluation)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)

## Description

ResNet (Residual Neural Network) was proposed by Kaiming He and three other researchers at Microsoft Research. Using residual units, they successfully trained a 152-layer network and won the ILSVRC 2015 classification competition with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet. Traditional convolutional or fully connected networks lose some information as it is passed forward, and they suffer from vanishing or exploding gradients, which makes very deep networks fail to train. ResNet alleviates these problems to a certain extent: by passing the input directly through to the output, it protects the integrity of the information, so the network only needs to learn the residual between input and output, which simplifies the learning objective and reduces its difficulty. This structure greatly accelerates the training of deep networks and also improves model accuracy. Residual units generalize well and can even be plugged directly into Inception-style networks.
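
The residual idea is easy to state in code. Below is a minimal, illustrative sketch in plain Python/NumPy; it is not the repository's implementation, and `conv_bn_relu` merely stands in for the block's real convolution stack:

```
import numpy as np

def conv_bn_relu(x):
    # Stand-in for the block's conv + batch-norm + ReLU stack (weights omitted).
    return np.maximum(x, 0.0)

def residual_block(x):
    # The stacked layers only learn the residual F(x) = H(x) - x;
    # the identity shortcut carries x to the output unchanged, so
    # gradients can always flow through the addition.
    return conv_bn_relu(x) + x
```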

These are examples of training ResNet50/ResNet101/SE-ResNet50 with the CIFAR-10 or ImageNet2012 dataset in MindSpore. ResNet50 and ResNet101 follow [paper 1](https://arxiv.org/pdf/1512.03385.pdf) below; SE-ResNet50 is a variant of ResNet50 described in [paper 2](https://arxiv.org/abs/1709.01507) and [paper 3](https://arxiv.org/abs/1812.01187) below. Training SE-ResNet50 for just 24 epochs on 8 Ascend 910 devices reaches a top-1 accuracy of 75.9%. (Training ResNet101 or SE-ResNet50 with the CIFAR-10 dataset is not supported yet.)
## Paper

1. [Paper](https://arxiv.org/pdf/1512.03385.pdf): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Deep Residual Learning for Image Recognition"

2. [Paper](https://arxiv.org/abs/1709.01507): Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu. "Squeeze-and-Excitation Networks"

3. [Paper](https://arxiv.org/abs/1812.01187): Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. "Bag of Tricks for Image Classification with Convolutional Neural Networks"

# [Model Architecture](#contents)

The overall network architecture of ResNet is shown below:

# [Dataset](#contents)

Dataset used: [CIFAR-10](http://www.cs.toronto.edu/~kriz/cifar.html)

- Dataset size: 60,000 32*32 colorful images in 10 classes
    - Train: 50,000 images
    - Test: 10,000 images
- Data format: binary files
    - Note: Data will be processed in dataset.py
- Download the dataset, the directory structure is as follows:

```
├─cifar-10-batches-bin
└─cifar-10-verify-bin
```
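
Since the data format is plain binary, the records are easy to inspect by hand. Here is a small illustrative reader, assuming the standard CIFAR-10 binary layout (1 label byte followed by 3072 pixel bytes per record); the actual loading is done by src/dataset.py:

```
import numpy as np

def read_cifar10_bin(path):
    # Each record: 1 label byte + 32*32*3 pixel bytes (R, G, B planes).
    raw = np.fromfile(path, dtype=np.uint8).reshape(-1, 3073)
    labels = raw[:, 0].astype(np.int32)
    images = raw[:, 1:].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)  # NHWC
    return images, labels

# e.g. images, labels = read_cifar10_bin("cifar-10-batches-bin/data_batch_1.bin")
```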

Dataset used: [ImageNet2012](http://www.image-net.org/)

- Dataset size: 224*224 colorful images in 1000 classes
    - Train: 1,281,167 images
    - Test: 50,000 images
- Data format: JPEG
    - Note: Data will be processed in dataset.py
- Download the dataset, the directory structure is as follows:

```
└─dataset
```

After installing MindSpore via the official website, you can start training and evaluation as follows:

- Running on Ascend

```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

- Running on GPU

```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

## [Script and Sample Code](#contents)

```
└──resnet
  ├── README.md
  ├── script
    ├── run_distribute_train.sh            # launch ascend distributed training(8 pcs)
    ├── run_parameter_server_train.sh      # launch ascend parameter server training(8 pcs)
    ├── run_eval.sh                        # launch ascend evaluation
    ├── run_standalone_train.sh            # launch ascend standalone training(1 pcs)
    ├── run_distribute_train_gpu.sh        # launch gpu distributed training(8 pcs)
    ├── run_parameter_server_train_gpu.sh  # launch gpu parameter server training(8 pcs)
    ├── run_standalone_train_gpu.sh        # launch gpu standalone training(1 pcs)
    └── run_eval_gpu.sh                    # launch gpu evaluation
  ├── src
    ├── config.py                          # parameter configuration
    ├── dataset.py                         # data preprocessing
    ├── crossentropy.py                    # loss definition for ImageNet2012 dataset
    ├── lr_generator.py                    # generate learning rate for each step
    └── resnet.py                          # resnet backbone, including resnet50, resnet101 and se-resnet50
  ├── eval.py                              # eval net
  └── train.py                             # train net
```
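
lr_generator.py produces the per-step learning rate used by training. As an illustration of the general shape (linear warmup followed by cosine decay), here is a hedged sketch; the exact schedule, warmup length, and decay mode in the repository may differ:

```
import numpy as np

def generate_lr(lr_max, total_epochs, steps_per_epoch, warmup_epochs=5):
    # One learning-rate value per training step: linear warmup, then cosine decay.
    total_steps = total_epochs * steps_per_epoch
    warmup_steps = warmup_epochs * steps_per_epoch
    lr = np.zeros(total_steps, dtype=np.float32)
    for step in range(total_steps):
        if step < warmup_steps:
            lr[step] = lr_max * (step + 1) / warmup_steps
        else:
            progress = (step - warmup_steps) / (total_steps - warmup_steps)
            lr[step] = lr_max * 0.5 * (1.0 + np.cos(np.pi * progress))
    return lr

# e.g. with the ResNet50/CIFAR-10 config below: generate_lr(0.1, 90, 195)
```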

## [Script Parameters](#contents)

Parameters for both training and evaluation can be set in config.py.

- Config for ResNet50, CIFAR-10 dataset

```
"class_num": 10,                  # dataset class num
...
"lr_max": 0.1,                    # maximum learning rate
```

- Config for ResNet50, ImageNet2012 dataset

```
"class_num": 1001,                # dataset class number
...
"lr_max": 0.1,                    # maximum learning rate
```

- Config for ResNet101, ImageNet2012 dataset

```
"class_num": 1001,                # dataset class number
...
"lr": 0.1                         # base learning rate
```

- Config for SE-ResNet50, ImageNet2012 dataset

```
"class_num": 1001,                # dataset class number
...
```
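
For illustration, a config of this shape is commonly defined as an attribute-style dict in config.py and read by train.py/eval.py. The sketch below assumes an easydict-style definition; the key names mirror the excerpts above, everything else is hypothetical:

```
from easydict import EasyDict as ed

config = ed({
    "class_num": 10,     # dataset class num
    "batch_size": 32,    # batch size per device
    "lr_max": 0.1,       # maximum learning rate
})

# train.py / eval.py can then read the values as attributes, e.g.
# build the classifier head with config.class_num and scale the
# learning-rate schedule with config.lr_max.
```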
## [Training Process](#contents)

### Usage

#### Running on Ascend

```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

For distributed training, an HCCL configuration file in JSON format needs to be created in advance.

Please follow the instructions in the link [hccl_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).

Training results will be stored in the example path, whose folder name begins with "train" or "train_parallel". Under this path, you can find checkpoint files together with result logs like the following.

#### Running on GPU

```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

#### Running parameter server mode training

- Parameter server training Ascend example

```
sh run_parameter_server_train.sh [resnet50|resnet101] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```

- Parameter server training GPU example

```
sh run_parameter_server_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```

### Result

- Training ResNet50 with CIFAR-10 dataset

```
# distribute training result(8 pcs)
...
epoch: 5 step: 195, loss is 1.393667
...
```

- Training ResNet50 with ImageNet2012 dataset

```
# distribute training result(8 pcs)
...
epoch: 5 step: 5004, loss is 3.1978393
...
```

- Training ResNet101 with ImageNet2012 dataset

```
# distribute training result(8p)
...
epoch: 69 step: 5004, loss is 2.0665488
epoch: 70 step: 5004, loss is 1.8717369
...
```

- Training SE-ResNet50 with ImageNet2012 dataset

```
# distribute training result(8 pcs)
...
epoch: 5 step: 5004, loss is 3.3501816
...
```
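
To track convergence across any of the runs above, the loss lines in the log are easy to parse. A small sketch, assuming the `epoch: N step: M, loss is X` format shown in the excerpts:

```
import re

LOSS_LINE = re.compile(r"epoch: (\d+) step: (\d+), loss is ([\d.]+)")

def parse_losses(log_path):
    # Returns a list of (epoch, step, loss) tuples extracted from a training log.
    with open(log_path) as f:
        return [(int(e), int(s), float(l))
                for e, s, l in LOSS_LINE.findall(f.read())]
```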
## [Evaluation Process](#contents)

### Usage

#### Running on Ascend

```
# evaluation
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]

# evaluation example
sh run_eval.sh resnet50 cifar10 ~/cifar10-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```
> checkpoint can be produced in training process.
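
Before scoring, eval.py has to restore the trained weights into the network. A minimal sketch of the usual MindSpore pattern; the helper name and surrounding wiring are assumptions, only `load_checkpoint`/`load_param_into_net` come from MindSpore itself:

```
from mindspore.train.serialization import load_checkpoint, load_param_into_net

def restore(net, ckpt_path):
    # Load a checkpoint produced by the training runs above (e.g. resnet-90_195.ckpt)
    # into an already-constructed network from src/resnet.py.
    param_dict = load_checkpoint(ckpt_path)
    load_param_into_net(net, param_dict)
    return net
```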

#### Running on GPU

```
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

### Result

Evaluation results will be stored in the example path, whose folder name is "eval". Under this path, you can find result logs like the following.

- Evaluating ResNet50 with CIFAR-10 dataset

```
result: {'acc': 0.91446314102564111} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

- Evaluating ResNet50 with ImageNet2012 dataset

```
result: {'acc': 0.7671054737516005} ckpt=train_parallel0/resnet-90_5004.ckpt
```

- Evaluating ResNet101 with ImageNet2012 dataset

```
result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199744} ckpt=train_parallel0/resnet-120_5004.ckpt
```

- Evaluating SE-ResNet50 with ImageNet2012 dataset

```
result: {'top_5_accuracy': 0.9342589628681178, 'top_1_accuracy': 0.768065781049936} ckpt=train_parallel0/resnet-24_5004.ckpt
```
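
The `top_1_accuracy`/`top_5_accuracy` numbers above follow the standard definition: a prediction counts as correct if the true label is among the k highest-scoring classes. A self-contained sketch, independent of the repository's metric code:

```
import numpy as np

def topk_accuracy(logits, labels, k=1):
    # Fraction of samples whose true label is among the k largest logits.
    topk = np.argsort(logits, axis=1)[:, -k:]
    return float(np.mean([label in row for row, label in zip(topk, labels)]))

# top_1 = topk_accuracy(logits, labels, k=1)
# top_5 = topk_accuracy(logits, labels, k=5)
```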

# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance

#### ResNet50 on CIFAR-10

| Parameters                 | Ascend 910                                   | GPU |
| -------------------------- | -------------------------------------------- | --- |
| Model Version              | ResNet50-v1.5                                | ResNet50-v1.5 |
| Resource                   | Ascend 910, CPU 2.60GHz 56cores, Memory 314G | GPU(Tesla V100 SXM2), CPU 2.1GHz 24cores, Memory 128G |
| uploaded Date              | 04/01/2020 (month/day/year)                  | 08/01/2020 (month/day/year) |
| MindSpore Version          | 0.1.0-alpha                                  | 0.6.0-alpha |
| Dataset                    | CIFAR-10                                     | CIFAR-10 |
| Training Parameters        | epoch=90, steps per epoch=195, batch_size = 32 | epoch=90, steps per epoch=195, batch_size = 32 |
| Optimizer                  | Momentum                                     | Momentum |
| Loss Function              | Softmax Cross Entropy                        | Softmax Cross Entropy |
| Total time                 | 6 mins                                       | 20.2 mins |
| Parameters (M)             | 25.5                                         | 25.5 |
| Checkpoint for Fine tuning | 179.7M (.ckpt file)                          | 179.7M (.ckpt file) |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### ResNet50 on ImageNet2012

| Parameters                 | Ascend 910                                   | GPU |
| -------------------------- | -------------------------------------------- | --- |
| Model Version              | ResNet50-v1.5                                | ResNet50-v1.5 |
| Resource                   | Ascend 910, CPU 2.60GHz 56cores, Memory 314G | GPU(Tesla V100 SXM2), CPU 2.1GHz 24cores, Memory 128G |
| uploaded Date              | 04/01/2020 (month/day/year)                  | 08/01/2020 (month/day/year) |
| MindSpore Version          | 0.1.0-alpha                                  | 0.6.0-alpha |
| Dataset                    | ImageNet2012                                 | ImageNet2012 |
| Total time                 | 139 mins                                     | 258 mins |
| Parameters (M)             | 25.5                                         | 25.5 |
| Checkpoint for Fine tuning | 197M (.ckpt file)                            | 197M (.ckpt file) |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### ResNet101 on ImageNet2012

| Parameters                 | Ascend 910                                   | GPU |
| -------------------------- | -------------------------------------------- | --- |
| Model Version              | ResNet101                                    | ResNet101 |
| Resource                   | Ascend 910, CPU 2.60GHz 56cores, Memory 314G | GPU(Tesla V100 SXM2), CPU 2.1GHz 24cores, Memory 128G |
| uploaded Date              | 04/01/2020 (month/day/year)                  | 08/01/2020 (month/day/year) |
| MindSpore Version          | 0.1.0-alpha                                  | 0.6.0-alpha |
| Dataset                    | ImageNet2012                                 | ImageNet2012 |
| Training Parameters        | epoch=120, steps per epoch=5004, batch_size = 32 | epoch=120, steps per epoch=5004, batch_size = 32 |
| Total time                 | 301 mins                                     | 1100 mins |
| Parameters (M)             | 44.6                                         | 44.6 |
| Checkpoint for Fine tuning | 343M (.ckpt file)                            | 343M (.ckpt file) |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### SE-ResNet50 on ImageNet2012

| Parameters                 | Ascend 910 |
| -------------------------- | ------------------------------------------------------------------------ |
| Model Version              | SE-ResNet50 |
| Resource                   | Ascend 910, CPU 2.60GHz 56cores, Memory 314G |
| uploaded Date              | 08/16/2020 (month/day/year) |
| MindSpore Version          | 0.7.0-alpha |
| Dataset                    | ImageNet2012 |
| Training Parameters        | epoch=24, steps per epoch=5004, batch_size = 32 |
| Optimizer                  | Momentum |
| Total time                 | 49.3 mins |
| Parameters (M)             | 25.5 |
| Checkpoint for Fine tuning | 215.9M (.ckpt file) |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

# [Description of Random Situation](#contents)