# Contents
- InceptionV3 Description
- Model Architecture
- Dataset
- Features
- Environment Requirements
- Script Description
- Model Description
- Description of Random Situation
- ModelZoo Homepage
# InceptionV3 Description

InceptionV3 by Google is the third version in a series of deep convolutional architectures. Compared with its predecessors, InceptionV3 focuses mainly on reducing computational cost while preserving accuracy. The idea was proposed in the paper Rethinking the Inception Architecture for Computer Vision, published in 2015.

Paper: Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, Zbigniew Wojna. Rethinking the Inception Architecture for Computer Vision[J]. 2015.
# Model Architecture

The overall network architecture of InceptionV3 is shown in the paper linked above.
# Dataset

The dataset used is the one described in the paper (ImageNet-1k).

- Dataset size: 125G, 1250k color images in 1000 classes
    - Train: 120G, 1200k images
    - Test: 5G, 50k images
- Data format: RGB images
    - Note: Data will be processed in src/dataset.py.
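The preprocessing in src/dataset.py is not reproduced here; the sketch below shows the typical shape of an ImageNet pipeline for InceptionV3 (299x299 inputs). All operator choices and parameter values are assumptions, not the file's exact contents, and module paths vary across MindSpore versions.

```python
# Hedged sketch of an ImageNet pipeline for InceptionV3 (assumed, not src/dataset.py).
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C
import mindspore.dataset.transforms.c_transforms as C2
import mindspore.common.dtype as mstype

def create_dataset_sketch(dataset_path, batch_size=128, training=True):
    data = ds.ImageFolderDataset(dataset_path, shuffle=training, num_parallel_workers=8)
    if training:
        # random crop + flip for augmentation; InceptionV3 takes 299x299 inputs
        trans = [C.RandomCropDecodeResize(299), C.RandomHorizontalFlip(prob=0.5)]
    else:
        trans = [C.Decode(), C.Resize(320), C.CenterCrop(299)]
    trans += [C.Normalize(mean=[127.5] * 3, std=[127.5] * 3), C.HWC2CHW()]
    data = data.map(operations=trans, input_columns="image", num_parallel_workers=8)
    data = data.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
    return data.batch(batch_size, drop_remainder=True)
```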
# Features

## Mixed Precision (Ascend)

The mixed precision training method accelerates deep neural network training by using both single-precision (FP32) and half-precision (FP16) data formats, while maintaining the accuracy achieved with pure single-precision training. Mixed precision speeds up computation, reduces memory usage, and allows larger models or batch sizes to be trained on the same hardware.

For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and searching for 'reduce precision'.
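As an illustration of this feature, the snippet below is a minimal sketch of enabling mixed precision through the `amp_level` argument of `mindspore.Model`. The network, loss, and optimizer here are placeholders, not the objects train.py actually builds.

```python
# Hedged sketch: enabling mixed precision via amp_level (placeholder network).
import mindspore.nn as nn
from mindspore import Model
from mindspore.train.loss_scale_manager import FixedLossScaleManager

net = nn.Dense(2048, 1000)  # placeholder; the real script builds InceptionV3
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.RMSProp(net.trainable_params(), learning_rate=0.045)

# amp_level="O3" casts the network to FP16; the fixed loss scale keeps small
# gradients from underflowing in half precision.
loss_scale = FixedLossScaleManager(1024, drop_overflow_update=False)
model = Model(net, loss_fn=loss, optimizer=opt,
              amp_level="O3", loss_scale_manager=loss_scale)
```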
# Environment Requirements

- Hardware (Ascend/GPU)
    - Prepare the hardware environment with an Ascend or GPU processor. If you want to try Ascend, please send the application form to ascend@huawei.com. Once approved, you can get the resources.
- Framework
    - MindSpore
- For more information, please check the resources below:
# Script Description

## Script and Sample Code

```text
.
└─Inception-v3
  ├─README.md
  ├─scripts
  │ ├─run_standalone_train.sh      # launch standalone training with Ascend platform (1p)
  │ ├─run_standalone_train_gpu.sh  # launch standalone training with GPU platform (1p)
  │ ├─run_distribute_train.sh      # launch distributed training with Ascend platform (8p)
  │ ├─run_distribute_train_gpu.sh  # launch distributed training with GPU platform (8p)
  │ ├─run_eval.sh                  # launch evaluation with Ascend platform
  │ └─run_eval_gpu.sh              # launch evaluation with GPU platform
  ├─src
  │ ├─config.py                    # parameter configuration
  │ ├─dataset.py                   # data preprocessing
  │ ├─inception_v3.py              # network definition
  │ ├─loss.py                      # customized cross-entropy loss function
  │ └─lr_generator.py              # learning rate generator
  ├─eval.py                        # evaluate the network
  ├─export.py                      # convert the checkpoint for inference
  ├─mindspore_hub_conf.py          # MindSpore Hub configuration
  └─train.py                       # train the network
```
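src/loss.py implements the customized cross entropy named above. The class below is only a hedged sketch of a label-smoothing cross entropy; the name, arguments, and structure are assumptions, not the file's code. The aux logits mentioned under `aux_factor` below are typically combined as `total_loss = loss(main_logits) + aux_factor * loss(aux_logits)` in the training network.

```python
# Hedged sketch of a label-smoothing cross entropy (assumed, not loss.py's code).
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor
import mindspore.common.dtype as mstype

class CrossEntropySmoothSketch(nn.Cell):
    def __init__(self, num_classes=1000, smooth_factor=0.1):
        super().__init__()
        self.num_classes = num_classes
        self.onehot = ops.OneHot()
        # label smoothing: the true class gets 1 - smooth_factor,
        # the remaining classes share smooth_factor
        self.on_value = Tensor(1.0 - smooth_factor, mstype.float32)
        self.off_value = Tensor(smooth_factor / (num_classes - 1), mstype.float32)
        self.ce = nn.SoftmaxCrossEntropyWithLogits(reduction="mean")

    def construct(self, logits, label):
        soft_label = self.onehot(label, self.num_classes, self.on_value, self.off_value)
        return self.ce(logits, soft_label)
```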
## Script Parameters

Major parameters in train.py and config.py are:

```python
'random_seed'          # fix random seed
'work_nums'            # number of workers to read the data
'decay_method'         # learning rate scheduler mode
'loss_scale'           # loss scale
'batch_size'           # input batch size
'epoch_size'           # total number of epochs
'num_classes'          # number of dataset classes
'smooth_factor'        # label smoothing factor
'aux_factor'           # loss weight of the aux logits
'lr_init'              # initial learning rate
'lr_max'               # upper bound of the learning rate
'lr_end'               # lower bound of the learning rate
'warmup_epochs'        # number of warmup epochs
'weight_decay'         # weight decay
'momentum'             # momentum
'opt_eps'              # epsilon for the optimizer
'keep_checkpoint_max'  # maximum number of checkpoints to keep
'ckpt_path'            # path for saving checkpoints
'is_save_on_master'    # save checkpoints on rank 0 only (distributed parameter)
'dropout_keep_prob'    # keep rate, between 0 and 1; e.g. keep_prob = 0.9 means dropping out 10% of input units
'has_bias'             # specifies whether the layer uses a bias vector
'amp_level'            # option for the `level` argument in `mindspore.amp.build_train_network`,
                       # the level of mixed precision training; supports [O0, O2, O3]
```
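For orientation, a hypothetical fragment of what src/config.py might look like follows. The keys match the list above, but every value shown is an illustrative placeholder, not the repository's actual setting.

```python
# Hypothetical config fragment; values are illustrative placeholders.
from easydict import EasyDict as edict

config = edict({
    'random_seed': 1,
    'work_nums': 8,
    'decay_method': 'cosine',
    'loss_scale': 1024,
    'batch_size': 128,
    'epoch_size': 250,
    'num_classes': 1000,
    'smooth_factor': 0.1,
    'aux_factor': 0.2,
    'lr_init': 0.00004,
    'lr_max': 0.4,
    'lr_end': 0.000004,
    'warmup_epochs': 1,
    'weight_decay': 0.00004,
    'momentum': 0.9,
    'opt_eps': 1.0,
    'keep_checkpoint_max': 10,
    'ckpt_path': './checkpoint/',
    'is_save_on_master': 0,
    'dropout_keep_prob': 0.8,
    'has_bias': False,
    'amp_level': 'O3',
})
```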
## Training Process

### Usage

You can start training using Python or shell scripts. The usage of the shell scripts is as follows:

- Ascend:

```shell
# distributed training example (8p)
sh scripts/run_distribute_train.sh RANK_TABLE_FILE DATA_PATH
# standalone training
sh scripts/run_standalone_train.sh DEVICE_ID DATA_PATH
```

Notes: RANK_TABLE_FILE can refer to Link, and the device_ip can be got as Link. For large models like InceptionV3, it is better to export an external environment variable

```shell
export HCCL_CONNECT_TIMEOUT=600
```

to extend the HCCL connection-checking time from the default 120 seconds to 600 seconds; otherwise the connection could time out, since compile time increases with model size. The `taskset` operations in scripts/run_distribute_train.sh bind processor cores according to `device_num` and the total number of processor cores. If you do not want this binding, remove the `taskset` operations from scripts/run_distribute_train.sh.
- GPU:

```shell
# distributed training example (8p)
sh scripts/run_distribute_train_gpu.sh DATA_DIR
# standalone training
sh scripts/run_standalone_train_gpu.sh DEVICE_ID DATA_DIR
```
### Launch

```shell
# training example
  python:
      Ascend: python train.py --dataset_path /dataset/train --platform Ascend
      GPU: python train.py --dataset_path /dataset/train --platform GPU

  shell:
      Ascend:
      # distributed training example (8p)
      sh scripts/run_distribute_train.sh RANK_TABLE_FILE DATA_PATH
      # standalone training
      sh scripts/run_standalone_train.sh DEVICE_ID DATA_PATH

      GPU:
      # distributed training example (8p)
      sh scripts/run_distribute_train_gpu.sh /dataset/train
      # standalone training example
      sh scripts/run_standalone_train_gpu.sh 0 /dataset/train
```
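For context on what the distributed launch sets up inside train.py, here is a minimal sketch of standard MindSpore data-parallel initialization. The exact calls are assumptions (module paths and argument names vary across MindSpore versions), not the file's code.

```python
# Hedged sketch of standard MindSpore data-parallel setup (assumed, not train.py's code).
from mindspore import context
from mindspore.context import ParallelMode
from mindspore.communication.management import init, get_rank, get_group_size

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
init()  # on Ascend this reads the RANK_TABLE_FILE; on GPU it uses NCCL
context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL,
                                  device_num=get_group_size(),
                                  gradients_mean=True)
rank_id = get_rank()  # used to shard the dataset so each device reads its own slice
```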
### Result

Training results will be stored in the example path. Checkpoints will be stored at ./checkpoint by default, and the training log will be redirected to ./log.txt, like the following:

```log
epoch: 0 step: 1251, loss is 5.7787247
epoch time: 360760.985 ms, per step time: 288.378 ms
epoch: 1 step: 1251, loss is 4.392868
epoch time: 160917.911 ms, per step time: 128.631 ms
```
## Eval Process

### Usage

You can start evaluation using Python or shell scripts. The usage of the shell scripts is as follows:

- Ascend:

```shell
sh scripts/run_eval.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
```

- GPU:

```shell
sh scripts/run_eval_gpu.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
```
### Launch

```shell
# eval example
  python:
      Ascend: python eval.py --dataset_path DATA_DIR --checkpoint PATH_CHECKPOINT --platform Ascend
      GPU: python eval.py --dataset_path DATA_DIR --checkpoint PATH_CHECKPOINT --platform GPU

  shell:
      Ascend: sh scripts/run_eval.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
      GPU: sh scripts/run_eval_gpu.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
```

The checkpoint can be produced during the training process.
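To show how the pieces connect, below is a hedged sketch of what eval.py does with a trained checkpoint. The network class and dataset helper names are assumptions, and the metric keys mirror the eval.log sample in the next subsection.

```python
# Hedged sketch of checkpoint evaluation (class/helper names are assumptions).
import mindspore.nn as nn
from mindspore import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from src.inception_v3 import InceptionV3  # network definition from this repo
from src.dataset import create_dataset    # preprocessing from this repo

net = InceptionV3(num_classes=1000)
load_param_into_net(net, load_checkpoint("PATH_CHECKPOINT"))
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
model = Model(net, loss_fn=loss,
              metrics={'Loss': nn.Loss(),
                       'Top1-Acc': nn.Top1CategoricalAccuracy(),
                       'Top5-Acc': nn.Top5CategoricalAccuracy()})
metric = model.eval(create_dataset("DATA_DIR", training=False))
print("metric:", metric)
```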
### Result

Evaluation results will be stored in the example path. You can find results like the following in eval.log:

```log
metric: {'Loss': 1.778, 'Top1-Acc': 0.788, 'Top5-Acc': 0.942}
```
# Model Description

## Performance

### Training Performance
| Parameters                 | Ascend                                          | GPU                                                        |
| -------------------------- | ----------------------------------------------- | ---------------------------------------------------------- |
| Model Version              | InceptionV3                                     | InceptionV3                                                |
| Resource                   | Ascend 910; CPU 2.60GHz, 192 cores; memory 755G | NV SMI V100-16G (PCIe); CPU 2.10GHz, 96 cores; memory 250G |
| Uploaded Date              | 08/21/2020                                      | 08/21/2020                                                 |
| MindSpore Version          | 0.6.0-beta                                      | 0.6.0-beta                                                 |
| Dataset                    | 1200k images                                    | 1200k images                                               |
| Batch_size                 | 128                                             | 128                                                        |
| Training Parameters        | src/config.py                                   | src/config.py                                              |
| Optimizer                  | RMSProp                                         | RMSProp                                                    |
| Loss Function              | SoftmaxCrossEntropy                             | SoftmaxCrossEntropy                                        |
| Outputs                    | probability                                     | probability                                                |
| Loss                       | 1.98                                            | 1.98                                                       |
| Accuracy (8p)              | ACC1[78.8%] ACC5[94.2%]                         | ACC1[78.7%] ACC5[94.1%]                                    |
| Total time (8p)            | 11h                                             | 72h                                                        |
| Params (M)                 | 103M                                            | 103M                                                       |
| Checkpoint for Fine tuning | 313M                                            | 312M                                                       |
| Scripts                    | inceptionv3 script                              | inceptionv3 script                                         |
### Inference Performance

| Parameters          | Ascend                                          |
| ------------------- | ----------------------------------------------- |
| Model Version       | InceptionV3                                     |
| Resource            | Ascend 910; CPU 2.60GHz, 192 cores; memory 755G |
| Uploaded Date       | 08/22/2020                                      |
| MindSpore Version   | 0.6.0-beta                                      |
| Dataset             | 50k images                                      |
| Batch_size          | 128                                             |
| Outputs             | probability                                     |
| Accuracy            | ACC1[78.8%] ACC5[94.2%]                         |
| Total time          | 2 mins                                          |
| Model for inference | 92M (.onnx file)                                |
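export.py converts a training checkpoint into the inference model listed above. A minimal sketch using MindSpore's `export` API follows; the input shape and file names are assumptions.

```python
# Hedged sketch of exporting the checkpoint to ONNX (shapes/names assumed).
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import export, load_checkpoint, load_param_into_net
from src.inception_v3 import InceptionV3

net = InceptionV3(num_classes=1000)
load_param_into_net(net, load_checkpoint("inceptionv3.ckpt"))
dummy_input = Tensor(np.zeros([1, 3, 299, 299], np.float32))  # NCHW, 299x299 input
export(net, dummy_input, file_name="inceptionv3", file_format="ONNX")
```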
# Description of Random Situation

In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
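A minimal sketch of the seeding described above (the seed value 1 is illustrative; the real value comes from config.py's `random_seed`):

```python
# Hedged sketch: fixing framework and dataset randomness (seed value illustrative).
import mindspore.dataset as ds
from mindspore.common import set_seed

set_seed(1)            # fixes weight-initializer and other framework-level randomness
ds.config.set_seed(1)  # fixes shuffle/augmentation randomness inside create_dataset
```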
# ModelZoo Homepage
Please check the official homepage.