forked from mindspore-Ecosystem/mindspore
!5243 1. add hub config for resnet 2. fix readme of deeplabv3 and inceptionv3 3. extend HCCL timeout of resnext50
Merge pull request !5243 from zhouyaqiang0/master
commit eced8b32e9
@@ -46,8 +46,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 
 # [Environment Requirements](#contents)
 
-- Hardware(Ascend/GPU)
-  - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+- Hardware(Ascend)
+  - Prepare hardware environment with Ascend. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
 - Framework
   - [MindSpore](http://10.90.67.50/mindspore/archive/20200506/OpenSource/me_vm_x86/)
 - For more information, please check the resources below:
@@ -60,32 +60,32 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 
 ```shell
 .
-└─DeeplabV3
-  │  README.md
-  │  eval.py
-  │  train.py
-  ├─scripts
-  │  run_distribute_train.sh        # launch distributed training with ascend platform(8p)
-  │  run_eval.sh                    # launch evaluating with ascend platform
-  │  run_standalone_train.sh        # launch standalone training with ascend platform(1p)
-  └─src
-  │  config.py                      # parameter configuration
-  │  deeplabv3.py                   # network definition
-  │  ei_dataset.py                  # data preprocessing for EI
-  │  losses.py                      # customized loss function
-  │  md_dataset.py                  # data preprocessing
-  │  miou_precision.py              # miou metrics
-  │  __init__.py
-  │
-  ├─backbone
-  │  resnet_deeplab.py              # backbone network definition
-  │  __init__.py
-  │
-  └─utils
-     adapter.py                     # adapter of dataset
-     custom_transforms.py           # random process dataset
-     file_io.py                     # file operation module
-     __init__.py
+└─deeplabv3
+  ├──README.md
+  ├──eval.py
+  ├──train.py
+  ├──scripts
+  │  ├──run_distribute_train.sh     # launch distributed training with ascend platform(8p)
+  │  ├──run_eval.sh                 # launch evaluating with ascend platform
+  │  ├──run_standalone_train.sh     # launch standalone training with ascend platform(1p)
+  ├──src
+  │  ├──config.py                   # parameter configuration
+  │  ├──deeplabv3.py                # network definition
+  │  ├──ei_dataset.py               # data preprocessing for EI
+  │  ├──losses.py                   # customized loss function
+  │  ├──md_dataset.py               # data preprocessing
+  │  ├──miou_precision.py           # miou metrics
+  │  ├──__init__.py
+  │  │
+  │  ├──backbone
+  │  │  ├──resnet_deeplab.py        # backbone network definition
+  │  │  ├──__init__.py
+  │  │
+  │  └──utils
+  │     ├──adapter.py               # adapter of dataset
+  │     ├──custom_transforms.py     # random process dataset
+  │     ├──file_io.py               # file operation module
+  │     ├──__init__.py
 ```
 
 ## [Script Parameters](#contents)
@@ -107,7 +107,7 @@ Major parameters in train.py and config.py are:
                             to refine segmentation results, default is None.
     image_pyramid           Input scales for multi-scale feature extraction, default is None.
     epoch_size              Epoch size, default is 6.
-    batch_size              batch size of input dataset: N, default is 2.
+    batch_size              Batch size of input dataset: N, default is 2.
     enable_save_ckpt        Enable save checkpoint, default is true.
     save_checkpoint_steps   Save checkpoint steps, default is 1000.
    save_checkpoint_num     Save checkpoint numbers, default is 1.
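The checkpoint options above map onto MindSpore's callback API; the following is a minimal illustrative sketch (the function name, prefix, and directory here are assumptions of this note, not code from train.py):

```python
# Illustrative only: wiring save_checkpoint_steps / save_checkpoint_num into
# MindSpore checkpoint callbacks. Nothing below is taken from the repository.
from mindspore.train.callback import CheckpointConfig, ModelCheckpoint, LossMonitor


def build_callbacks(prefix="deeplabv3", directory="./checkpoint"):
    ckpt_config = CheckpointConfig(save_checkpoint_steps=1000,  # save_checkpoint_steps
                                   keep_checkpoint_max=1)       # save_checkpoint_num
    # ModelCheckpoint writes a .ckpt file every save_checkpoint_steps steps and
    # keeps at most keep_checkpoint_max of them on disk.
    return [ModelCheckpoint(prefix=prefix, directory=directory, config=ckpt_config),
            LossMonitor()]
```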
@@ -123,7 +123,7 @@ You can start training using python or shell scripts. The usage of shell scripts
 sh scripts/run_distribute_train.sh RANK_TABLE_FILE DATA_PATH (CKPT_PATH)
 
 > Notes:
-    RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training_ascend.html), and the device_ip can be got in /etc/hccn.conf in ascend server.
+    RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training_ascend.html), and the device_ip can be got as https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools.
 
 ### Launch
 
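For orientation, here is a rough sketch of the distributed setup that the rank table enables inside a training script; module paths differ slightly between MindSpore releases, so treat it as a shape rather than the repository's actual code.

```python
# Illustrative sketch, not from the repository: the launch script exports
# RANK_TABLE_FILE and DEVICE_ID, and the training process consumes them.
import os

from mindspore import context
from mindspore.communication.management import init, get_rank, get_group_size


def init_distributed():
    device_id = int(os.environ.get("DEVICE_ID", "0"))
    context.set_context(mode=context.GRAPH_MODE, device_target="Ascend", device_id=device_id)
    init()  # reads the rank table exported by run_distribute_train.sh
    return get_rank(), get_group_size()
```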
@@ -133,7 +133,7 @@ sh scripts/run_distribute_train.sh RANK_TABLE_FILE DATA_PATH (CKPT_PATH)
 python train.py --dataset_url DATA_PATH
 
 shell:
-sh scripts/run_distribute_train.sh RANK_TABLE_FILE DATA_PATH (CKPT_PATH)
+sh scripts/run_standalone_train.sh DEVICE_ID DATA_PATH (CKPT_PATH)
 ```
 > Notes:
     If you are running a fine-tuning or evaluation task, prepare the corresponding checkpoint file.
@@ -171,7 +171,7 @@ sh scripts/run_eval.sh DEVICE_ID DATA_PATH PRETRAINED_CKPT_PATH
 
 ### Result
 
-Evaluation result will be stored in the example path, you can find result like the followings in `log.txt`.
+Evaluation result will be stored in the example path, you can find result like the followings in `eval.log`.
 
 ```
 mIoU = 0.65049
@@ -197,6 +197,7 @@ mIoU = 0.65049
 | Total time                 | 5mins |
 | Params (M)                 | 94M   |
 | Checkpoint for Fine tuning | 100M  |
+| Scripts                    | [deeplabv3 script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/deeplabv3) | [deeplabv3 script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/deeplabv3) |
 
 #### Inference Performance
 
@@ -49,10 +49,10 @@ do
     end=`expr $start \+ $core_gap`
     cmdopt=$start"-"$end
 
-    rm -rf LOG$i
-    mkdir ./LOG$i
-    cp *.py ./LOG$i
-    cd ./LOG$i || exit
+    rm -rf train_parallel$i
+    mkdir ./train_parallel$i
+    cp *.py ./train_parallel$i
+    cd ./train_parallel$i || exit
     echo "start training for rank $i, device $DEVICE_ID"
     mkdir -p ms_log
     CUR_DIR=`pwd`
@@ -31,4 +31,4 @@ export GLOG_logtostderr=0
 python eval.py \
     --device_id=$DEVICE_ID \
     --checkpoint_url=$PATH_CHECKPOINT \
-    --data_url=$DATA_DIR > log.txt 2>&1 &
+    --data_url=$DATA_DIR > eval.log 2>&1 &
@@ -20,7 +20,7 @@
 
 # [InceptionV3 Description](#contents)
 
-InceptionV3 by Google is the 3rd version in a series of Deep Learning Convolutional Architectures.
+InceptionV3 by Google is the 3rd version in a series of Deep Learning Convolutional Architectures. Inception v3 mainly focuses on burning less computational power by modifying the previous Inception architectures. This idea was proposed in the paper Rethinking the Inception Architecture for Computer Vision, published in 2015.
 
 [Paper](https://arxiv.org/pdf/1512.00567.pdf) Min Sun, Ali Farhadi, Steve Seitz. Ranking Domain-Specific Highlights by Analyzing Edited Videos[J]. 2014.
 
@@ -36,8 +36,8 @@ The overall network architecture of InceptionV3 is show below:
 Dataset used can refer to paper.
 
 - Dataset size: ~125G, 1.2W colorful images in 1000 classes
-  - Train: 120G, 1.2W images
-  - Test: 5G, 50000 images
+  - Train: 120G, 1200k images
+  - Test: 5G, 50k images
 - Data format: RGB images.
   - Note: Data will be processed in src/dataset.py
 
@@ -68,11 +68,11 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 ├─README.md
 ├─scripts
   ├─run_standalone_train.sh          # launch standalone training with ascend platform(1p)
-  ├─run_standalone_train_for_gpu.sh  # launch standalone training with gpu platform(1p)
+  ├─run_standalone_train_gpu.sh      # launch standalone training with gpu platform(1p)
   ├─run_distribute_train.sh          # launch distributed training with ascend platform(8p)
-  ├─run_distribute_train_for_gpu.sh  # launch distributed training with gpu platform(8p)
+  ├─run_distribute_train_gpu.sh      # launch distributed training with gpu platform(8p)
   ├─run_eval.sh                      # launch evaluating with ascend platform
-  └─run_eval_for_gpu.sh              # launch evaluating with gpu platform
+  └─run_eval_gpu.sh                  # launch evaluating with gpu platform
 ├─src
   ├─config.py                        # parameter configuration
   ├─dataset.py                       # data preprocessing
@@ -88,27 +88,32 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 
 ```python
 Major parameters in train.py and config.py are:
-'random_seed': 1,                # fix random seed
-'rank': 0,                       # local rank of distributed
-'group_size': 1,                 # world size of distributed
-'work_nums': 8,                  # number of workers to read the data
-'decay_method': 'cosine',        # learning rate scheduler mode
-"loss_scale": 1,                 # loss scale
-'batch_size': 128,               # input batchsize
-'epoch_size': 250,               # total epoch numbers
-'num_classes': 1000,             # dataset class numbers
-'smooth_factor': 0.1,            # label smoothing factor
-'aux_factor': 0.2,               # loss factor of aux logit
-'lr_init': 0.00004,              # initiate learning rate
-'lr_max': 0.4,                   # max bound of learning rate
-'lr_end': 0.000004,              # min bound of learning rate
-'warmup_epochs': 1,              # warmup epoch numbers
-'weight_decay': 0.00004,         # weight decay
-'momentum': 0.9,                 # momentum
-'opt_eps': 1.0,                  # epsilon
-'keep_checkpoint_max': 100,      # max numbers to keep checkpoints
-'ckpt_path': './checkpoint/',    # save checkpoint path
-'is_save_on_master': 1           # save checkpoint on rank0, distributed parameters
+'random_seed'                    # fix random seed
+'rank'                           # local rank of distributed
+'group_size'                     # world size of distributed
+'work_nums'                      # number of workers to read the data
+'decay_method'                   # learning rate scheduler mode
+"loss_scale"                     # loss scale
+'batch_size'                     # input batchsize
+'epoch_size'                     # total epoch numbers
+'num_classes'                    # dataset class numbers
+'smooth_factor'                  # label smoothing factor
+'aux_factor'                     # loss factor of aux logit
+'lr_init'                        # initiate learning rate
+'lr_max'                         # max bound of learning rate
+'lr_end'                         # min bound of learning rate
+'warmup_epochs'                  # warmup epoch numbers
+'weight_decay'                   # weight decay
+'momentum'                       # momentum
+'opt_eps'                        # epsilon
+'keep_checkpoint_max'            # max numbers to keep checkpoints
+'ckpt_path'                      # save checkpoint path
+'is_save_on_master'              # save checkpoint on rank0, distributed parameters
+'dropout_keep_prob'              # the keep rate, between 0 and 1, e.g. keep_prob = 0.9, means dropping out 10% of input units
+'has_bias'                       # specifies whether the layer uses a bias vector
+'amp_level'                      # option for argument `level` in `mindspore.amp.build_train_network`, level for mixed
+                                 # precision training. Supports [O0, O2, O3].
 
 ```
 
 ## [Training process](#contents)
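The new `amp_level` entry refers to MindSpore's mixed-precision helper; a hedged sketch of how such an option is usually forwarded is shown below, with `net`, `loss`, and `opt` standing for objects built elsewhere rather than names from this repository.

```python
# Illustrative sketch of passing an amp_level-style option to
# mindspore.amp.build_train_network; not code from train.py.
from mindspore import amp, nn


def build_with_amp(net: nn.Cell, loss: nn.Cell, opt: nn.Optimizer, amp_level: str = "O2"):
    # "O0" keeps FP32; "O2"/"O3" cast most of the network to FP16 for speed.
    return amp.build_train_network(net, opt, loss, level=amp_level)
```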
@@ -125,13 +130,15 @@ sh run_distribute_train.sh RANK_TABLE_FILE DATA_PATH
 # standalone training
 sh run_standalone_train.sh DEVICE_ID DATA_PATH
 ```
+> Notes:
+    RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training_ascend.html), and the device_ip can be got as https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools.
 
 - GPU:
 ```
 # distribute training example(8p)
-sh run_distribute_train_for_gpu.sh DATA_DIR
+sh run_distribute_train_gpu.sh DATA_DIR
 # standalone training
-sh run_standalone_train_for_gpu.sh DEVICE_ID DATA_DIR
+sh run_standalone_train_gpu.sh DEVICE_ID DATA_DIR
 ```
 
 ### Launch
@@ -143,10 +150,16 @@ sh run_standalone_train_for_gpu.sh DEVICE_ID DATA_DIR
 GPU: python train.py --dataset_path /dataset/train --platform GPU
 
 shell:
-# distributed training example(8p) for GPU
-sh scripts/run_distribute_train_for_gpu.sh /dataset/train
-# standalone training example for GPU
-sh scripts/run_standalone_train_for_gpu.sh 0 /dataset/train
+Ascend:
+# distribute training example(8p)
+sh run_distribute_train.sh RANK_TABLE_FILE DATA_PATH
+# standalone training
+sh run_standalone_train.sh DEVICE_ID DATA_PATH
+GPU:
+# distributed training example(8p)
+sh scripts/run_distribute_train_gpu.sh /dataset/train
+# standalone training example
+sh scripts/run_standalone_train_gpu.sh 0 /dataset/train
 ```
 
 ### Result
@@ -166,7 +179,7 @@ Epoch time: 160917.911, per step time: 128.631
 You can start training using python or shell scripts. The usage of shell scripts as follows:
 
 - Ascend: sh run_eval.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
-- GPU: sh run_eval_for_gpu.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
+- GPU: sh run_eval_gpu.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
 
 ### Launch
 
@@ -178,14 +191,14 @@ You can start training using python or shell scripts. The usage of shell scripts
 
 shell:
     Ascend: sh run_eval.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
-    GPU: sh run_eval_for_gpu.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
+    GPU: sh run_eval_gpu.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
 ```
 
 > checkpoint can be produced in training process.
 
 ### Result
 
-Evaluation result will be stored in the example path, you can find result like the followings in `log.txt`.
+Evaluation result will be stored in the example path, you can find result like the followings in `eval.log`.
 
 ```
 metric: {'Loss': 1.778, 'Top1-Acc':0.788, 'Top5-Acc':0.942}
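A metrics dictionary of this shape typically comes from `Model.eval`; the sketch below is illustrative only, with `net`, `loss`, and `eval_dataset` as placeholders rather than the repository's eval.py code.

```python
# Illustrative sketch of producing a {'Loss', 'Top1-Acc', 'Top5-Acc'} result
# with MindSpore's built-in metrics; not code from this repository.
from mindspore import nn
from mindspore.train.model import Model


def evaluate(net, loss, eval_dataset):
    metrics = {'Loss': nn.Loss(),
               'Top1-Acc': nn.Top1CategoricalAccuracy(),
               'Top5-Acc': nn.Top5CategoricalAccuracy()}
    model = Model(net, loss_fn=loss, metrics=metrics)
    # Returns a dict such as {'Loss': 1.778, 'Top1-Acc': 0.788, 'Top5-Acc': 0.942}
    return model.eval(eval_dataset)
```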
@@ -198,9 +211,9 @@ metric: {'Loss': 1.778, 'Top1-Acc':0.788, 'Top5-Acc':0.942}
 ### Training Performance
 
 | Parameters                 | InceptionV3                                  |                           |
-| -------------------------- | ---------------------------------------------------------- | ------------------------- |
+| -------------------------- | ---------------------------------------------- | ------------------------- |
 | Model Version              |                                              |                           |
-| Resource                   | Ascend 910, cpu:2.60GHz 56cores, memory:314G | NV SMX2 V100-32G          |
+| Resource                   | Ascend 910, cpu:2.60GHz 56cores, memory:314G | NV SMI V100-16G(PCIE),cpu:2.10GHz 96cores, memory:250G |
 | uploaded Date              | 08/21/2020                                   | 08/21/2020                |
 | MindSpore Version          | 0.6.0-beta                                   | 0.6.0-beta                |
 | Training Parameters        | src/config.py                                | src/config.py             |
@@ -208,10 +221,12 @@ metric: {'Loss': 1.778, 'Top1-Acc':0.788, 'Top5-Acc':0.942}
 | Loss Function              | SoftmaxCrossEntropy                          | SoftmaxCrossEntropy       |
 | outputs                    | probability                                  | probability               |
 | Loss                       | 1.98                                         | 1.98                      |
-| Accuracy                   | ACC1[78.8%] ACC5[94.2%]                      | ACC1[78.7%] ACC5[94.1%]   |
-| Total time                 | 11h                                          | 72h                       |
+| Accuracy (8p)              | ACC1[78.8%] ACC5[94.2%]                      | ACC1[78.7%] ACC5[94.1%]   |
+| Total time (8p)            | 11h                                          | 72h                       |
 | Params (M)                 | 103M                                         | 103M                      |
-| Checkpoint for Fine tuning | 313M                                         | 312.41M                   |
+| Checkpoint for Fine tuning | 313M                                         | 312M                      |
+| Scripts                    | [inceptionv3 script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/inceptionv3) | [inceptionv3 script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/inceptionv3) |
 
 #### Inference Performance
 
@@ -221,7 +236,7 @@ metric: {'Loss': 1.778, 'Top1-Acc':0.788, 'Top5-Acc':0.942}
 | Resource          | Ascend 910                  |
 | Uploaded Date     | 08/22/2020 (month/day/year) |
 | MindSpore Version | 0.6.0-beta                  |
-| Dataset           | 50,000 images               |
+| Dataset           | 50k images                  |
 | batch_size        | 128                         |
 | outputs           | probability                 |
 | Accuracy          | ACC1[78.8%] ACC5[94.2%]     |
@@ -35,10 +35,10 @@ do
     end=`expr $start \+ $core_gap`
     cmdopt=$start"-"$end
 
-    rm -rf LOG$i
-    mkdir ./LOG$i
-    cp *.py ./LOG$i
-    cd ./LOG$i || exit
+    rm -rf train_parallel$i
+    mkdir ./train_parallel$i
+    cp *.py ./train_parallel$i
+    cd ./train_parallel$i || exit
     echo "start training for rank $i, device $DEVICE_ID"
 
     env > env.log
@@ -21,4 +21,4 @@ PATH_CHECKPOINT=$3
 python eval.py \
     --platform=Ascend \
     --checkpoint=$PATH_CHECKPOINT \
-    --dataset_path=$DATA_DIR > log.txt 2>&1 &
+    --dataset_path=$DATA_DIR > eval.log 2>&1 &
@@ -37,7 +37,7 @@ config_gpu = edict({
     'weight_decay': 0.00004,
     'momentum': 0.9,
     'opt_eps': 1.0,
-    'keep_checkpoint_max': 100,
+    'keep_checkpoint_max': 10,
     'ckpt_path': './checkpoint/',
     'is_save_on_master': 0,
     'dropout_keep_prob': 0.5,
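Since `config_gpu` is a plain EasyDict, parameters are read as attributes; the snippet below restates the values visible in this hunk purely as an illustration of that access pattern.

```python
# Illustrative sketch of the src/config.py pattern: an EasyDict holding
# hyperparameters, read as attributes in train.py / eval.py.
from easydict import EasyDict as edict

config_gpu = edict({
    'weight_decay': 0.00004,
    'momentum': 0.9,
    'opt_eps': 1.0,
    'keep_checkpoint_max': 10,
    'ckpt_path': './checkpoint/',
    'is_save_on_master': 0,
    'dropout_keep_prob': 0.5,
})

print(config_gpu.keep_checkpoint_max)  # 10
print(config_gpu.ckpt_path)            # ./checkpoint/
```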
@@ -0,0 +1,25 @@
+# Copyright 2020 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""hub config."""
+from src.resnet import resnet50, resnet101, se_resnet50
+
+def create_network(name, **kwargs):
+    if name == 'resnet50':
+        return resnet50(**kwargs)
+    if name == 'resnet101':
+        return resnet101(**kwargs)
+    if name == 'se_resnet50':
+        return se_resnet50(**kwargs)
+    raise NotImplementedError(f"{name} is not implemented in the repo")
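Assuming the new file is the conventional `mindspore_hub_conf.py` and that the resnet constructors in src/resnet.py accept a `class_num` keyword (both assumptions, not confirmed by this diff), usage would look roughly like this:

```python
# Illustrative usage of the hub config entry point: MindSpore Hub resolves a
# model by name through create_network(). File name and class_num are assumed.
from mindspore_hub_conf import create_network

net = create_network('resnet50', class_num=1000)
se_net = create_network('se_resnet50', class_num=1001)
```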
@@ -17,6 +17,8 @@
 DATA_DIR=$2
 export RANK_TABLE_FILE=$1
 export RANK_SIZE=8
+export HCCL_CONNECT_TIMEOUT=600
+echo "hccl connect time out has changed to 600 second"
 PATH_CHECKPOINT=""
 if [ $# == 3 ]
 then