centernet_det can run on ModelArts and infer on Ascend 310
This commit is contained in:
parent
53366ce732
commit
277db549b0
|
@ -9,14 +9,18 @@
|
|||
- [Script and Sample Code](#script-and-sample-code)
|
||||
- [Script Parameters](#script-parameters)
|
||||
- [Training Process](#training-process)
|
||||
- [Training](#training)
|
||||
- [Distributed Training](#distributed-training)
|
||||
- [Testing Process](#testing-process)
|
||||
- [Testing and Evaluation](#testing-and-evaluation)
|
||||
- [Inference Process](#inference-process)
|
||||
- [Convert](#convert)
|
||||
- [Infer on Ascend310](#infer-on-ascend310)
|
||||
- [Result](#result)
|
||||
- [Model Description](#model-description)
|
||||
- [Performance](#performance)
|
||||
- [Training Performance On Ascend 910](#training-performance-on-ascend-910)
|
||||
- [Inference Performance On Ascend 910](#inference-performance-on-ascend-910)
|
||||
- [Inference Performance On Ascend 310](#inference-performance-on-ascend-310)
|
||||
- [ModelZoo Homepage](#modelzoo-homepage)
|
||||
|
||||
# [CenterNet Description](#contents)
|
||||
|
@ -38,7 +42,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
|
|||
Dataset used: [COCO2017](https://cocodataset.org/)
|
||||
|
||||
- Dataset size: 26G
|
||||
- Train: 19G, 118,000 images
|
||||
- Val: 0.8G, 5,000 images
|
||||
- Test: 6.3G, 40,000 images
|
||||
- Annotations: 808M (instances, captions, etc.)
|
||||
|
@ -76,12 +80,14 @@ Dataset used: [COCO2017](https://cocodataset.org/)
|
|||
# [Environment Requirements](#contents)
|
||||
|
||||
- Hardware (Ascend)
|
||||
|
||||
- Prepare hardware environment with Ascend processor.
|
||||
- Framework
|
||||
|
||||
- [MindSpore](https://www.mindspore.cn/install/en)
|
||||
- For more information, please check the resources below:
|
||||
- [MindSpore tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
|
||||
- [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
|
||||
- Download the dataset COCO2017.
|
||||
- We use COCO2017 as the training dataset in this example by default; you can also use your own datasets.
|
||||
|
||||
|
@ -114,6 +120,105 @@ Dataset used: [COCO2017](https://cocodataset.org/)
|
|||
|
||||
# [Quick Start](#contents)
|
||||
|
||||
- Running on local
|
||||
|
||||
After installing MindSpore via the official website, you can start training and evaluation as follows:
|
||||
|
||||
Note:
|
||||
1. The first run of training will generate the mindrecord files, which will take a long time.
|
||||
2. MINDRECORD_DATASET_PATH is the mindrecord dataset directory.
|
||||
3. For `train.py`, LOAD_CHECKPOINT_PATH is the path of the pretrained checkpoint file; if there is none, set it to "".
|
||||
4. For `eval.py`, LOAD_CHECKPOINT_PATH is the checkpoint to be evaluated.
|
||||
5. RUN_MODE supports validation and testing; set it to "val" or "test".
|
||||
|
||||
```shell
|
||||
# create dataset in mindrecord format
|
||||
bash scripts/convert_dataset_to_mindrecord.sh [COCO_DATASET_DIR] [MINDRECORD_DATASET_DIR]
|
||||
|
||||
# standalone training on Ascend
|
||||
bash scripts/run_standalone_train_ascend.sh [DEVICE_ID] [MINDRECORD_DATASET_PATH] [LOAD_CHECKPOINT_PATH](optional)
|
||||
|
||||
# distributed training on Ascend
|
||||
bash scripts/run_distributed_train_ascend.sh [MINDRECORD_DATASET_PATH] [RANK_TABLE_FILE] [LOAD_CHECKPOINT_PATH](optional)
|
||||
|
||||
# eval on Ascend
|
||||
bash scripts/run_standalone_eval_ascend.sh [DEVICE_ID] [RUN_MODE] [DATA_DIR] [LOAD_CHECKPOINT_PATH]
|
||||
```
|
||||
|
||||
- Running on ModelArts
|
||||
|
||||
If you want to run on ModelArts, please check the official documentation of [ModelArts](https://support.huaweicloud.com/modelarts/), then you can start training as follows
|
||||
|
||||
- Creating the mindrecord dataset with a single card on ModelArts
|
||||
|
||||
```text
|
||||
# (1) Upload the code folder to S3 bucket.
|
||||
# (2) Upload the COCO2017 dataset to S3 bucket.
|
||||
# (3) Click "create task" on the website UI interface.
|
||||
# (4) Set the code directory to "/{path}/centernet_det" on the website UI interface.
|
||||
# (5) Set the startup file to "/{path}/centernet_det/dataset.py" on the website UI interface.
|
||||
# (6) Perform a or b.
|
||||
# a. Set parameters in "/{path}/centernet_det/default_config.yaml".
|
||||
# 1. Set "enable_modelarts: True"
|
||||
# b. Add parameters on the website UI interface.
|
||||
# 1. Add "enable_modelarts=True"
|
||||
# (7) Check the "data storage location" on the website UI interface and set the "Dataset path" path.
|
||||
# (8) Set the "Output file path" and "Job log path" to your path on the website UI interface.
|
||||
# (9) Under the item "resource pool selection", select the specification of a single card.
|
||||
# (10) Create your job.
|
||||
```
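Setting `enable_modelarts: True` makes the startup file copy data between OBS and the job's local cache before and after the actual work runs; `src/model_utils/moxing_adapter.py` wraps this. Below is a minimal sketch of the transfer step, assuming the `moxing` package that ModelArts provides (the helper names here are illustrative, not the repository's API):

```python
import os

def sync_data_from_obs(data_url, local_data_path="/cache/data"):
    """Copy the dataset from OBS to the local cache before the job starts.

    `moxing` only exists inside a ModelArts job, so import it lazily.
    """
    import moxing as mox  # ModelArts file-transfer API
    os.makedirs(local_data_path, exist_ok=True)
    mox.file.copy_parallel(data_url, local_data_path)

def sync_output_to_obs(local_output_path, train_url):
    """Copy outputs (mindrecord files, checkpoints, logs) back to OBS."""
    import moxing as mox
    mox.file.copy_parallel(local_output_path, train_url)
```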
|
||||
|
||||
- Training with a single card on ModelArts
|
||||
|
||||
```text
|
||||
# (1) Upload the code folder to S3 bucket.
|
||||
# (2) Click "create task" on the website UI interface.
|
||||
# (3) Set the code directory to "/{path}/centernet_det" on the website UI interface.
|
||||
# (4) Set the startup file to "/{path}/centernet_det/train.py" on the website UI interface.
|
||||
# (5) Perform a or b.
|
||||
# a. Set parameters in "/{path}/centernet_det/default_config.yaml".
|
||||
# 1. Set "enable_modelarts: True"
|
||||
# 2. Set "epoch_size: 130"
|
||||
# 3. Set "distribute: 'true'"
|
||||
# 4. Set "save_checkpoint_path: ./checkpoints"
|
||||
# b. Add parameters on the website UI interface.
|
||||
# 1. Add "enable_modelarts=True"
|
||||
# 2. Add "epoch_size=130"
|
||||
# 3. Add "distribute=true"
|
||||
# 4. Add "save_checkpoint_path=./checkpoints"
|
||||
# (6) Upload the mindrecord dataset to S3 bucket.
|
||||
# (7) Check the "data storage location" on the website UI interface and set the "Dataset path" path.
|
||||
# (8) Set the "Output file path" and "Job log path" to your path on the website UI interface.
|
||||
# (9) Under the item "resource pool selection", select the specification of a single card.
|
||||
# (10) Create your job.
|
||||
```
|
||||
|
||||
- Evaluating with a single card on ModelArts
|
||||
|
||||
```text
|
||||
# (1) Upload the code folder to S3 bucket.
|
||||
# (2) Git clone https://github.com/xingyizhou/CenterNet.git locally, and put the folder 'CenterNet' under the folder 'centernet' on the S3 bucket (it provides the soft-NMS extension; see the sketch after this block).
|
||||
# (3) Click "create task" on the website UI interface.
|
||||
# (4) Set the code directory to "/{path}/centernet_det" on the website UI interface.
|
||||
# (5) Set the startup file to "/{path}/centernet_det/eval.py" on the website UI interface.
|
||||
# (6) Perform a or b.
|
||||
# a. Set parameters in "/{path}/centernet_det/default_config.yaml".
|
||||
# 1. Set "enable_modelarts: True"
|
||||
# 2. Set "run_mode: 'val'"
|
||||
# 3. Set "load_checkpoint_path='/cache/checkpoint_path/model.ckpt'" on yaml file.
|
||||
# 4. Set "checkpoint_url=/The path of checkpoint in S3/" on yaml file.
|
||||
# b. Add parameters on the website UI interface.
|
||||
# 1. Add "enable_modelarts=True"
|
||||
# 2. Add "run_mode=val"
|
||||
# 3. Add "load_checkpoint_path='/cache/checkpoint_path/model.ckpt'" on the website UI interface.
|
||||
# 4. Add "checkpoint_url=/The path of checkpoint in S3/" on the website UI interface.
|
||||
# (7) Upload the dataset(not mindrecord format) to S3 bucket.
|
||||
# (8) Check the "data storage location" on the website UI interface and set the "Dataset path" path.
|
||||
# (9) Set the "Output file path" and "Job log path" to your path on the website UI interface.
|
||||
# (10) Under the item "resource pool selection", select the specification of a single card.
|
||||
# (11) Create your job.
|
||||
```
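Step (2) above is needed because evaluation calls the `soft_nms` extension compiled from the original CenterNet repository; `modelarts_pre_process` in `eval.py` builds it on the fly. For reference, here is a plain NumPy sketch of the linear soft-NMS idea; it illustrates the algorithm only and is not the compiled extension the scripts use:

```python
import numpy as np

def soft_nms_linear(boxes, scores, iou_thresh=0.3, score_thresh=0.001):
    """Linear soft-NMS: decay the scores of boxes overlapping the current
    top box instead of discarding them outright.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,).
    Returns indices of kept boxes in descending score order."""
    scores = scores.astype(np.float32).copy()
    idxs = np.arange(len(scores))
    keep = []
    while idxs.size > 0:
        top = np.argmax(scores[idxs])
        cur = idxs[top]
        keep.append(cur)
        idxs = np.delete(idxs, top)
        if idxs.size == 0:
            break
        # IoU between the chosen box and the remaining candidates
        x1 = np.maximum(boxes[cur, 0], boxes[idxs, 0])
        y1 = np.maximum(boxes[cur, 1], boxes[idxs, 1])
        x2 = np.minimum(boxes[cur, 2], boxes[idxs, 2])
        y2 = np.minimum(boxes[cur, 3], boxes[idxs, 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_a = (boxes[cur, 2] - boxes[cur, 0]) * (boxes[cur, 3] - boxes[cur, 1])
        area_b = (boxes[idxs, 2] - boxes[idxs, 0]) * (boxes[idxs, 3] - boxes[idxs, 1])
        iou = inter / (area_a + area_b - inter)
        scores[idxs] *= np.where(iou > iou_thresh, 1.0 - iou, 1.0)  # linear decay
        idxs = idxs[scores[idxs] > score_thresh]                    # prune weak boxes
    return keep
```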
|
||||
|
||||
|
||||
# [Script Description](#contents)
|
||||
|
||||
## [Script and Sample Code](#contents)
|
||||
|
@ -145,29 +236,38 @@ bash scripts/run_standalone_eval_ascend.sh [DEVICE_ID] [RUN_MODE] [DATA_DIR] [LO
|
|||
├── centernet_det
|
||||
├── train.py // training scripts
|
||||
├── eval.py // testing and evaluation script
|
||||
├── export.py // convert mindspore model to mindir model
|
||||
├── README.md // descriptions about centernet_det
|
||||
├── default_config.yaml // parameter configuration
|
||||
├── ascend310_infer // application for 310 inference
|
||||
├── preprocess.py // preprocess scripts
|
||||
├── postprocess.py // postprocess scripts
|
||||
├── scripts
|
||||
│ ├── ascend_distributed_launcher
|
||||
│ │ ├── __init__.py
|
||||
│ │ ├── hyper_parameter_config.ini // hyper parameter for distributed training
|
||||
│ │ ├── get_distribute_train_cmd.py // script for distributed training
|
||||
│ │ ├── README.md
|
||||
│ ├── convert_dataset_to_mindrecord.sh // shell script for converting coco type dataset to mindrecord
|
||||
│ ├── run_standalone_train_ascend.sh // shell script for standalone training on ascend
|
||||
│ ├── run_infer_310.sh // shell script for 310 inference on ascend
|
||||
│ ├── run_distributed_train_ascend.sh // shell script for distributed training on ascend
|
||||
│ ├── run_standalone_eval_ascend.sh // shell script for standalone evaluation on ascend
|
||||
└── src
|
||||
├── model_utils
|
||||
│ ├── config.py // parsing parameter configuration file of "*.yaml"
|
||||
│ ├── device_adapter.py // local or ModelArts training
|
||||
│ ├── local_adapter.py // get related environment variables on local
|
||||
│ └── moxing_adapter.py // get related environment variables and transfer data on ModelArts
|
||||
├── __init__.py
|
||||
├── centernet_det.py // centernet networks, training entry
|
||||
├── dataset.py // generate dataloader and data processing entry
|
||||
├── decode.py // decode the head features
|
||||
├── hourglass.py // hourglass backbone
|
||||
├── image.py // image preprocess functions
|
||||
├── post_process.py // post-process functions after decode in inference
|
||||
├── utils.py // auxiliary functions for train, to log and preload
|
||||
└── visual.py // visualization image, bbox, score and keypoints
|
||||
```
|
||||
|
||||
## [Script Parameters](#contents)
|
||||
|
@ -202,7 +302,7 @@ usage: train.py [--device_target DEVICE_TARGET] [--distribute DISTRIBUTE]
|
|||
[--save_result_dir SAVE_RESULT_DIR]
|
||||
|
||||
options:
|
||||
--device_target device where the code will be implemented: "Ascend"
|
||||
--distribute training by several devices: "true"(training by more than 1 device) | "false", default is "true"
|
||||
--need_profiler whether to use the profiling tools: "true" | "false", default is "false"
|
||||
--profiler_path path to save the profiling results: PATH, default is ""
|
||||
|
@ -233,7 +333,7 @@ usage: eval.py [--device_target DEVICE_TARGET] [--device_id N]
|
|||
[--visual_image VISUAL_IMAGE]
|
||||
[--enable_eval ENABLE_EVAL] [--save_result_dir SAVE_RESULT_DIR]
|
||||
options:
|
||||
--device_target device where the code will be implemented: "Ascend"
|
||||
--device_id device id to run task, default is 0
|
||||
--load_checkpoint_path initial checkpoint (usually from a pre-trained CenterNet model): PATH, default is ""
|
||||
--data_dir validation or test dataset dir: PATH, default is ""
|
||||
|
@ -249,21 +349,20 @@ Parameters for training and evaluation can be set in file `config.py`.
|
|||
#### Options
|
||||
|
||||
```text
|
||||
train_config.
|
||||
batch_size: 12 // batch size of input dataset: N, default is 12
|
||||
loss_scale_value: 1024 // initial value of loss scale: N, default is 1024
|
||||
optimizer: 'Adam' // optimizer used in the network: Adam, default is Adam
|
||||
lr_schedule: 'MultiDecay' // schedules to get the learning rate
|
||||
```
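These options live under the `train_config` key of `default_config.yaml` and are exposed to the scripts as attribute-style objects by `src/model_utils/config.py`. A minimal sketch of that pattern, assuming PyYAML is installed (the real parser also merges command-line overrides):

```python
import yaml

class Config:
    """Wrap a dict so nested keys read as attributes, e.g. cfg.train_config.batch_size."""
    def __init__(self, d):
        for key, value in d.items():
            setattr(self, key, Config(value) if isinstance(value, dict) else value)

with open("default_config.yaml") as f:
    # the yaml holds several '---' documents; the first one carries the values
    cfg = Config(next(yaml.safe_load_all(f)))

print(cfg.train_config.batch_size)   # 12
print(cfg.train_config.lr_schedule)  # 'MultiDecay'
```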
|
||||
|
||||
```text
|
||||
eval_config.
|
||||
SOFT_NMS: True // nms after decode: True | False, default is True
|
||||
keep_res: True // keep original or fix resolution: True | False, default is True
|
||||
multi_scales: [1.0] // use multi-scales of image: List, default is [1.0]
|
||||
K: 100 // number of bboxes to be computed by TopK, default is 100
|
||||
score_thresh: 0.3 // threshold of score when visualize image and annotation info,default is 0.3
|
||||
```
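`K: 100` bounds how many candidate boxes are decoded per image: the class heatmap is max-pooled so that only local peaks survive, and the K highest responses become detections (see `src/decode.py`). A NumPy sketch of just the top-K peak selection on an assumed `(num_classes, H, W)` heatmap:

```python
import numpy as np

def topk_peaks(heatmap, k=100):
    """Pick the K highest-scoring peaks from a (num_classes, H, W) heatmap.

    Returns scores, class ids, and (y, x) grid positions of each peak."""
    c, h, w = heatmap.shape
    flat = heatmap.reshape(-1)
    top = np.argsort(flat)[::-1][:k]   # indices of the K largest responses
    scores = flat[top]
    classes = top // (h * w)
    ys = (top % (h * w)) // w
    xs = top % w
    return scores, classes, ys, xs

# usage: decode a random 80-class 128x128 heatmap (the model's output_res)
hm = np.random.rand(80, 128, 128).astype(np.float32)
scores, classes, ys, xs = topk_peaks(hm)
print(scores.shape, classes[:5])
```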
|
||||
|
||||
#### Parameters
|
||||
|
@ -282,7 +381,6 @@ Parameters for dataset (Training/Evaluation):
|
|||
flip_prop probability of image flip during data augmentation: N, default is 0.5
|
||||
color_aug color augmentation of RGB image, default is True
|
||||
coco_classes name of categories in COCO2017
|
||||
coco_class_name2id ID corresponding to the categories in COCO2017
|
||||
mean mean value of RGB image
|
||||
std standard deviation of RGB image
|
||||
eig_vec eigenvectors of RGB image
|
||||
|
@ -290,8 +388,7 @@ Parameters for dataset (Training/Evaluation):
|
|||
|
||||
Parameters for network (Training/Evaluation):
|
||||
down_ratio the ratio of input and output resolution during training, default is 4
|
||||
last_level the last level in final upsampling, default is 6
|
||||
num_stacks the number of stacked hourglass network, default is 2
|
||||
n the number of stacked hourglass modules, default is 5
|
||||
heads the output heads for heatmap, width/height and offset, default is {'hm': 80, 'wh': 2, 'reg': 2}
|
||||
cnv_dim the dimension of the convolution, default is 256
|
||||
|
@ -309,7 +406,7 @@ Parameters for network (Training/Evaluation):
|
|||
|
||||
Parameters for optimizer and learning rate:
|
||||
Adam:
|
||||
weight_decay weight decay: Q
|
||||
decay_filter lambda expression to specify which params will be decayed
|
||||
|
||||
PolyDecay:
|
||||
|
@ -331,7 +428,7 @@ Parameters for optimizer and learning rate:
|
|||
|
||||
Before your first training, you need to convert the COCO-format dataset to mindrecord files to improve performance on the host.
|
||||
|
||||
```shell
|
||||
bash scripts/convert_dataset_to_mindrecord.sh /path/coco_dataset_dir /path/mindrecord_dataset_dir
|
||||
```
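After the conversion you can sanity-check the generated files before starting training. A small sketch using `mindspore.dataset.MindDataset` (the MindSpore 1.x argument name `dataset_file` is assumed); the shard file name below is an assumption following the default `mindrecord_prefix: "coco_det.train.mind"` from `default_config.yaml`:

```python
import mindspore.dataset as ds

# assumed shard name: <mindrecord_prefix> plus a shard index suffix
data_file = "/path/mindrecord_dataset_dir/coco_det.train.mind0"
dataset = ds.MindDataset(dataset_file=data_file)
print("number of samples:", dataset.get_dataset_size())
for item in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
    print("columns:", list(item.keys()))
    break
```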
|
||||
|
||||
|
@ -341,13 +438,13 @@ The command above will run in the background, after converting mindrecord files
|
|||
|
||||
#### Running on Ascend
|
||||
|
||||
```shell
|
||||
bash scripts/run_distributed_train_ascend.sh /path/mindrecord_dataset /path/hccl.json /path/load_ckpt(optional)
|
||||
```
|
||||
|
||||
The command above will run in the background; you can view training logs in LOG*/training_log.txt and LOG*/ms_log/. After training finishes, you will get some checkpoint files under the LOG*/ckpt_0 folder by default. The loss value will be displayed as follows:
|
||||
|
||||
```text
|
||||
# grep "epoch" training_log.txt
|
||||
epoch: 128, current epoch percent: 1.000, step: 157509, outputs are (Tensor(shape=[], dtype=Float32, value= 1.54529), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 1024))
|
||||
epoch time: 1211875.286 ms, per step time: 992.527 ms
|
||||
|
@ -360,13 +457,10 @@ epoch time: 1214703.313 ms, per step time: 994.843 ms
|
|||
|
||||
### Testing and Evaluation
|
||||
|
||||
```shell
|
||||
# Evaluation based on the validation dataset is done automatically; for the test or test-dev dataset, the results should be uploaded to the CodaLab official website (https://competitions.codalab.org).
|
||||
# On Ascend
|
||||
bash scripts/run_standalone_eval_ascend.sh device_id val(or test) /path/coco_dataset /path/load_ckpt
|
||||
```
|
||||
|
||||
You can see the mAP result as below:
|
||||
|
@ -387,30 +481,89 @@ overall performance on coco2017 validation dataset
|
|||
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.764
|
||||
```
|
||||
|
||||
## [Inference Process](#contents)
|
||||
|
||||
### Convert
|
||||
|
||||
If you want to infer the network on Ascend 310, you should convert the model to MINDIR:
|
||||
|
||||
- Export on local
|
||||
|
||||
```text
|
||||
python export.py --device_id [DEVICE_ID] --export_format MINDIR --export_load_ckpt [CKPT_FILE_PATH] --export_name [EXPORT_FILE_NAME]
|
||||
```
|
||||
|
||||
- Export on ModelArts (if you want to run on ModelArts, please check the official documentation of [ModelArts](https://support.huaweicloud.com/modelarts/), then you can start as follows)
|
||||
|
||||
```text
|
||||
# (1) Upload the code folder to S3 bucket.
|
||||
# (2) Click "create training task" on the website UI interface.
|
||||
# (3) Set the code directory to "/{path}/centernet_det" on the website UI interface.
|
||||
# (4) Set the startup file to "/{path}/centernet_det/export.py" on the website UI interface.
|
||||
# (5) Perform a or b.
|
||||
# a. Set parameters in "/{path}/centernet_det/default_config.yaml".
|
||||
# 1. Set "enable_modelarts: True"
|
||||
# 2. Set "export_load_ckpt: ./{path}/*.ckpt" ('export_load_ckpt' is the path of the weight file to be exported, relative to `export.py`; the weight file must be included in the code directory.)
|
||||
# 3. Set "export_name: centernet_det"
|
||||
# 4. Set "export_format: MINDIR"
|
||||
# b. Add parameters on the website UI interface.
|
||||
# 1. Add "enable_modelarts=True"
|
||||
# 2. Add "export_load_ckpt=./{path}/*.ckpt" ('export_load_ckpt' is the path of the weight file to be exported, relative to `export.py`; the weight file must be included in the code directory.)
|
||||
# 3. Add "export_name=centernet_det"
|
||||
# 4. Add "export_format=MINDIR"
|
||||
# (6) Check the "data storage location" on the website UI interface and set the "Dataset path" path (this step is not actually used, but required).
|
||||
# (7) Set the "Output file path" and "Job log path" to your path on the website UI interface.
|
||||
# (8) Under the item "resource pool selection", select the specification of a single card.
|
||||
# (9) Create your job.
|
||||
# You will see centernet_det.mindir under {Output file path}.
|
||||
```
|
||||
|
||||
### Infer on Ascend310
|
||||
|
||||
Before performing inference, the MINDIR file must be exported by the `export.py` script. We only provide an example of inference using the MINDIR model. Currently, batch_size can only be set to 1.
|
||||
|
||||
```shell
|
||||
# Ascend 310 inference
|
||||
bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [PREPROCESS_IMAGES] [DEVICE_ID]
|
||||
```
|
||||
|
||||
- `PREPROCESS_IMAGES` Whether preprocessing is needed; its value must be in [y, n]
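When `PREPROCESS_IMAGES` is `y`, `preprocess.py` converts every validation image into a fixed-shape binary input for the 310 model. A simplified sketch of that idea with OpenCV and NumPy, assuming the 512x512 `input_res` and the RGB `mean`/`std` from `default_config.yaml`; the real script additionally applies an affine transform and records the meta information consumed by `postprocess.py`:

```python
import os
import cv2
import numpy as np

# RGB statistics taken from dataset_config in default_config.yaml
MEAN = np.array([0.40789654, 0.44719302, 0.47026115], dtype=np.float32)
STD = np.array([0.28863828, 0.27408164, 0.27809835], dtype=np.float32)

def image_to_bin(img_path, out_dir, input_res=(512, 512)):
    """Resize, normalize, and dump one image as the .bin the 310 model reads."""
    img = cv2.imread(img_path)                      # BGR, HWC, uint8
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)      # stats above are RGB
    img = cv2.resize(img, input_res)                # fixed shape on Ascend 310
    img = (img.astype(np.float32) / 255.0 - MEAN) / STD
    img = img.transpose(2, 0, 1)[np.newaxis, ...]   # HWC -> NCHW, add batch dim
    name = os.path.splitext(os.path.basename(img_path))[0]
    img.tofile(os.path.join(out_dir, name + ".bin"))
```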
|
||||
|
||||
### Result
|
||||
|
||||
The inference result is saved in the current path; you can find results like the following in the acc.log file. Since the input images have a fixed shape on Ascend 310, the accuracy is slightly lower than that on Ascend 910.
|
||||
|
||||
```log
|
||||
# acc.log
|
||||
=============coco2017 310 infer result=========
|
||||
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.410
|
||||
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.600
|
||||
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.440
|
||||
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.213
|
||||
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.437
|
||||
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.567
|
||||
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.339
|
||||
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.543
|
||||
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.572
|
||||
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.342
|
||||
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.620
|
||||
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.764
|
||||
```
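Each output `.bin` file holds the 100 decoded detections for one image; `postprocess.py` reads them back with shape `(1, 100, 6)`. A sketch of loading one file and keeping boxes above the 0.3 `score_thresh`; the per-row layout `[x1, y1, x2, y2, score, class]` is an assumption for illustration:

```python
import numpy as np

def load_detections(bin_file, score_thresh=0.3):
    """Read one Ascend 310 output file and keep confident detections.

    Assumed row layout: [x1, y1, x2, y2, score, class]."""
    dets = np.fromfile(bin_file, dtype=np.float32).reshape(1, 100, 6)[0]
    return dets[dets[:, 4] > score_thresh]
```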
|
||||
|
||||
# [Model Description](#contents)
|
||||
|
||||
## [Performance](#contents)
|
||||
|
||||
### Training Performance On Ascend 910
|
||||
|
||||
CenterNet on 118K images (the annotation and data format must be the same as COCO2017)
|
||||
|
||||
| Parameters | CenterNet_Hourglass |
|
||||
| -------------------------- | ---------------------------------------------------------------|
|
||||
| Resource | Ascend 910; CPU 2.60GHz, 192cores; Memory 755G |
|
||||
| uploaded Date | 3/27/2021 (month/day/year) |
|
||||
| MindSpore Version | 1.1.0 |
|
||||
| Dataset | COCO2017 |
|
||||
| Training Parameters | 8p, epoch=130, steps=158730, batch_size = 12, lr=2.4e-4 |
|
||||
| Optimizer | Adam |
|
||||
| Loss Function | Focal Loss, L1 Loss, RegLoss |
|
||||
|
@ -420,22 +573,36 @@ CenterNet on 11.8K images(The annotation and data format must be the same as coc
|
|||
| Total time: training | 8p: 44 h |
|
||||
| Total time: evaluation | keep res: test 1h, val 0.25h; fix res: test 40 min, val 8 min|
|
||||
| Checkpoint | 2.3G (.ckpt file) |
|
||||
| Scripts | [centernet_det script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/centernet_det) |
|
||||
|
||||
### Inference Performance On Ascend 910
|
||||
|
||||
CenterNet on validation(5K images) and test-dev(40K images)
|
||||
|
||||
| Parameters | CenterNet_Hourglass |
|
||||
| -------------------------- | ----------------------------------------------------------------|
|
||||
| Resource | Ascend 910; CPU 2.60GHz, 192cores; Memory 755G |
|
||||
| uploaded Date | 3/27/2021 (month/day/year) |
|
||||
| MindSpore Version | 1.1.0 |
|
||||
| Dataset | COCO2017 |
|
||||
| batch_size | 1 |
|
||||
| outputs | mAP |
|
||||
| Accuracy(validation) | MAP: 41.5%, AP50: 60.4%, AP75: 44.7%, Medium: 45.7%, Large: 53.6%|
|
||||
|
||||
### Inference Performance On Ascend 310
|
||||
|
||||
CenterNet on validation(5K images)
|
||||
|
||||
| Parameters | CenterNet_Hourglass |
|
||||
| -------------------------- | ----------------------------------------------------------------|
|
||||
| Resource | Ascend 310; CentOS 3.10 |
|
||||
| uploaded Date | 8/31/2021 (month/day/year) |
|
||||
| MindSpore Version | 1.4.0 |
|
||||
| Dataset | COCO2017 |
|
||||
| batch_size | 1 |
|
||||
| outputs | mAP |
|
||||
| Accuracy(validation) | MAP: 41.0%, AP50: 60.0%, AP75: 44.0%, Medium: 43.7%, Large: 56.7%|
|
||||
|
||||
# [Description of Random Situation](#contents)
|
||||
|
||||
In run_distributed_train_ascend.sh, we set do_shuffle to True to shuffle the dataset by default.
|
||||
|
@ -444,3 +611,7 @@ In train.py, we set a random seed to make sure that each node has the same initi
|
|||
# [ModelZoo Homepage](#contents)
|
||||
|
||||
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
|
||||
|
||||
# FAQ
|
||||
|
||||
First refer to [ModelZoo FAQ](https://gitee.com/mindspore/mindspore/tree/master/model_zoo#FAQ) to find answers to some common questions.
|
|
@ -0,0 +1,15 @@
|
|||
cmake_minimum_required(VERSION 3.14.1)
|
||||
project(Ascend310Infer)
|
||||
add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)
|
||||
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O2 -g -std=c++17 -Werror -Wall -fPIE -Wl,--allow-shlib-undefined")
|
||||
set(PROJECT_SRC_ROOT ${CMAKE_CURRENT_LIST_DIR}/)
|
||||
option(MINDSPORE_PATH "mindspore install path" "")
|
||||
include_directories(${MINDSPORE_PATH})
|
||||
include_directories(${MINDSPORE_PATH}/include)
|
||||
include_directories(${PROJECT_SRC_ROOT})
|
||||
find_library(MS_LIB libmindspore.so ${MINDSPORE_PATH}/lib)
|
||||
file(GLOB_RECURSE MD_LIB ${MINDSPORE_PATH}/_c_dataengine*)
|
||||
|
||||
add_executable(main src/main.cc src/utils.cc)
|
||||
target_link_libraries(main ${MS_LIB} ${MD_LIB} gflags)
|
||||
|
|
@ -0,0 +1,29 @@
|
|||
#!/bin/bash
|
||||
# Copyright 2021 Huawei Technologies Co., Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
# ============================================================================
|
||||
if [ -d out ]; then
|
||||
rm -rf out
|
||||
fi
|
||||
|
||||
mkdir out
|
||||
cd out || exit
|
||||
|
||||
if [ -f "Makefile" ]; then
|
||||
make clean
|
||||
fi
|
||||
|
||||
cmake .. \
|
||||
-DMINDSPORE_PATH="`pip3.7 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`"
|
||||
make
|
|
@ -0,0 +1,32 @@
|
|||
/**
|
||||
* Copyright 2021 Huawei Technologies Co., Ltd
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
#ifndef MINDSPORE_INFERENCE_UTILS_H_
|
||||
#define MINDSPORE_INFERENCE_UTILS_H_
|
||||
|
||||
#include <sys/stat.h>
|
||||
#include <dirent.h>
|
||||
#include <vector>
|
||||
#include <string>
|
||||
#include <memory>
|
||||
#include "include/api/types.h"
|
||||
|
||||
std::vector<std::string> GetAllFiles(std::string_view dirName);
|
||||
DIR *OpenDir(std::string_view dirName);
|
||||
std::string RealPath(std::string_view path);
|
||||
mindspore::MSTensor ReadFileToTensor(const std::string &file);
|
||||
int WriteResult(const std::string& imageFile, const std::vector<mindspore::MSTensor> &outputs);
|
||||
#endif
|
|
@ -0,0 +1,134 @@
|
|||
/**
|
||||
* Copyright 2021 Huawei Technologies Co., Ltd
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
#include <sys/time.h>
|
||||
#include <gflags/gflags.h>
|
||||
#include <dirent.h>
|
||||
#include <iostream>
|
||||
#include <string>
|
||||
#include <algorithm>
|
||||
#include <iosfwd>
|
||||
#include <vector>
|
||||
#include <fstream>
|
||||
#include <sstream>
|
||||
|
||||
#include "include/api/model.h"
|
||||
#include "include/api/context.h"
|
||||
#include "include/api/types.h"
|
||||
#include "include/api/serialization.h"
|
||||
#include "include/dataset/execute.h"
|
||||
#include "include/dataset/vision.h"
|
||||
#include "inc/utils.h"
|
||||
|
||||
using mindspore::Context;
|
||||
using mindspore::Serialization;
|
||||
using mindspore::Model;
|
||||
using mindspore::Status;
|
||||
using mindspore::MSTensor;
|
||||
using mindspore::dataset::Execute;
|
||||
using mindspore::ModelType;
|
||||
using mindspore::GraphCell;
|
||||
using mindspore::kSuccess;
|
||||
|
||||
DEFINE_string(mindir_path, "", "mindir path");
|
||||
DEFINE_string(input0_path, ".", "input0 path");
|
||||
DEFINE_int32(device_id, 0, "device id");
|
||||
|
||||
int main(int argc, char **argv) {
|
||||
gflags::ParseCommandLineFlags(&argc, &argv, true);
|
||||
if (RealPath(FLAGS_mindir_path).empty()) {
|
||||
std::cout << "Invalid mindir" << std::endl;
|
||||
return 1;
|
||||
}
|
||||
|
||||
auto context = std::make_shared<Context>();
|
||||
auto ascend310 = std::make_shared<mindspore::Ascend310DeviceInfo>();
|
||||
ascend310->SetDeviceID(FLAGS_device_id);
|
||||
ascend310->SetPrecisionMode("allow_fp32_to_fp16");
|
||||
ascend310->SetOpSelectImplMode("high_precision");
|
||||
ascend310->SetBufferOptimizeMode("off_optimize");
|
||||
context->MutableDeviceInfo().push_back(ascend310);
|
||||
mindspore::Graph graph;
|
||||
Serialization::Load(FLAGS_mindir_path, ModelType::kMindIR, &graph);
|
||||
|
||||
Model model;
|
||||
Status ret = model.Build(GraphCell(graph), context);
|
||||
if (ret != kSuccess) {
|
||||
std::cout << "ERROR: Build failed." << std::endl;
|
||||
return 1;
|
||||
}
|
||||
|
||||
std::vector<MSTensor> model_inputs = model.GetInputs();
|
||||
if (model_inputs.empty()) {
|
||||
std::cout << "Invalid model, inputs is empty." << std::endl;
|
||||
return 1;
|
||||
}
|
||||
|
||||
auto input0_files = GetAllFiles(FLAGS_input0_path);
|
||||
|
||||
if (input0_files.empty()) {
|
||||
std::cout << "ERROR: input data empty." << std::endl;
|
||||
return 1;
|
||||
}
|
||||
|
||||
std::map<double, double> costTime_map;
|
||||
size_t size = input0_files.size();
|
||||
|
||||
for (size_t i = 0; i < size; ++i) {
|
||||
struct timeval start = {0};
|
||||
struct timeval end = {0};
|
||||
double startTimeMs;
|
||||
double endTimeMs;
|
||||
std::vector<MSTensor> inputs;
|
||||
std::vector<MSTensor> outputs;
|
||||
std::cout << "Start predict input files:" << input0_files[i] << std::endl;
|
||||
|
||||
auto input0 = ReadFileToTensor(input0_files[i]);
|
||||
|
||||
inputs.emplace_back(model_inputs[0].Name(), model_inputs[0].DataType(), model_inputs[0].Shape(),
|
||||
input0.Data().get(), input0.DataSize());
|
||||
|
||||
gettimeofday(&start, nullptr);
|
||||
ret = model.Predict(inputs, &outputs);
|
||||
gettimeofday(&end, nullptr);
|
||||
if (ret != kSuccess) {
|
||||
std::cout << "Predict " << input0_files[i] << " failed." << std::endl;
|
||||
return 1;
|
||||
}
|
||||
startTimeMs = (1.0 * start.tv_sec * 1000000 + start.tv_usec) / 1000;
|
||||
endTimeMs = (1.0 * end.tv_sec * 1000000 + end.tv_usec) / 1000;
|
||||
costTime_map.insert(std::pair<double, double>(startTimeMs, endTimeMs));
|
||||
WriteResult(input0_files[i], outputs);
|
||||
}
|
||||
double average = 0.0;
|
||||
int inferCount = 0;
|
||||
|
||||
for (auto iter = costTime_map.begin(); iter != costTime_map.end(); iter++) {
|
||||
double diff = 0.0;
|
||||
diff = iter->second - iter->first;
|
||||
average += diff;
|
||||
inferCount++;
|
||||
}
|
||||
average = average / inferCount;
|
||||
std::stringstream timeCost;
|
||||
timeCost << "NN inference cost average time: " << average << " ms of infer_count " << inferCount << std::endl;
|
||||
std::cout << "NN inference cost average time: " << average << "ms of infer_count " << inferCount << std::endl;
|
||||
std::string fileName = "./time_Result" + std::string("/test_perform_static.txt");
|
||||
std::ofstream fileStream(fileName.c_str(), std::ios::trunc);
|
||||
fileStream << timeCost.str();
|
||||
fileStream.close();
|
||||
costTime_map.clear();
|
||||
return 0;
|
||||
}
|
|
@ -0,0 +1,128 @@
|
|||
/**
|
||||
* Copyright 2021 Huawei Technologies Co., Ltd
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
#include <fstream>
|
||||
#include <algorithm>
|
||||
#include <iostream>
|
||||
#include "inc/utils.h"
|
||||
|
||||
using mindspore::MSTensor;
|
||||
using mindspore::DataType;
|
||||
|
||||
std::vector<std::string> GetAllFiles(std::string_view dirName) {
|
||||
struct dirent *filename;
|
||||
DIR *dir = OpenDir(dirName);
|
||||
if (dir == nullptr) {
|
||||
return {};
|
||||
}
|
||||
std::vector<std::string> res;
|
||||
while ((filename = readdir(dir)) != nullptr) {
|
||||
std::string dName = std::string(filename->d_name);
|
||||
if (dName == "." || dName == ".." || filename->d_type != DT_REG) {
|
||||
continue;
|
||||
}
|
||||
res.emplace_back(std::string(dirName) + "/" + filename->d_name);
|
||||
}
|
||||
std::sort(res.begin(), res.end());
|
||||
for (auto &f : res) {
|
||||
std::cout << "image file: " << f << std::endl;
|
||||
}
|
||||
return res;
|
||||
}
|
||||
|
||||
int WriteResult(const std::string& imageFile, const std::vector<MSTensor> &outputs) {
|
||||
std::string homePath = "./result_Files";
|
||||
for (size_t i = 0; i < outputs.size(); ++i) {
|
||||
size_t outputSize;
|
||||
std::shared_ptr<const void> netOutput;
|
||||
netOutput = outputs[i].Data();
|
||||
outputSize = outputs[i].DataSize();
|
||||
int pos = imageFile.rfind('/');
|
||||
std::string fileName(imageFile, pos + 1);
|
||||
fileName.replace(fileName.find('.'), fileName.size() - fileName.find('.'), '_' + std::to_string(i) + ".bin");
|
||||
std::string outFileName = homePath + "/" + fileName;
|
||||
FILE * outputFile = fopen(outFileName.c_str(), "wb");
|
||||
fwrite(netOutput.get(), outputSize, sizeof(char), outputFile);
|
||||
fclose(outputFile);
|
||||
outputFile = nullptr;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
mindspore::MSTensor ReadFileToTensor(const std::string &file) {
|
||||
if (file.empty()) {
|
||||
std::cout << "Pointer file is nullptr" << std::endl;
|
||||
return mindspore::MSTensor();
|
||||
}
|
||||
|
||||
std::ifstream ifs(file);
|
||||
if (!ifs.good()) {
|
||||
std::cout << "File: " << file << " is not exist" << std::endl;
|
||||
return mindspore::MSTensor();
|
||||
}
|
||||
|
||||
if (!ifs.is_open()) {
|
||||
std::cout << "File: " << file << "open failed" << std::endl;
|
||||
return mindspore::MSTensor();
|
||||
}
|
||||
|
||||
ifs.seekg(0, std::ios::end);
|
||||
size_t size = ifs.tellg();
|
||||
mindspore::MSTensor buffer(file, mindspore::DataType::kNumberTypeUInt8, {static_cast<int64_t>(size)}, nullptr, size);
|
||||
|
||||
ifs.seekg(0, std::ios::beg);
|
||||
ifs.read(reinterpret_cast<char *>(buffer.MutableData()), size);
|
||||
ifs.close();
|
||||
|
||||
return buffer;
|
||||
}
|
||||
|
||||
DIR *OpenDir(std::string_view dirName) {
|
||||
if (dirName.empty()) {
|
||||
std::cout << " dirName is null ! " << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
std::string realPath = RealPath(dirName);
|
||||
struct stat s;
|
||||
lstat(realPath.c_str(), &s);
|
||||
if (!S_ISDIR(s.st_mode)) {
|
||||
std::cout << "dirName is not a valid directory !" << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
DIR *dir;
|
||||
dir = opendir(realPath.c_str());
|
||||
if (dir == nullptr) {
|
||||
std::cout << "Can not open dir " << dirName << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
std::cout << "Successfully opened the dir " << dirName << std::endl;
|
||||
return dir;
|
||||
}
|
||||
|
||||
std::string RealPath(std::string_view path) {
|
||||
char realPathMem[PATH_MAX] = {0};
|
||||
char *realPathRet = nullptr;
|
||||
realPathRet = realpath(path.data(), realPathMem);
|
||||
|
||||
if (realPathRet == nullptr) {
|
||||
std::cout << "File: " << path << " is not exist.";
|
||||
return "";
|
||||
}
|
||||
|
||||
std::string realPath(realPathMem);
|
||||
std::cout << path << " realpath is: " << realPath << std::endl;
|
||||
return realPath;
|
||||
}
|
|
@ -0,0 +1,275 @@
|
|||
# Builtin Configurations(DO NOT CHANGE THESE CONFIGURATIONS unless you know exactly what you are doing)
|
||||
enable_modelarts: False
|
||||
# Url for modelarts
|
||||
data_url: ""
|
||||
train_url: ""
|
||||
checkpoint_url: ""
|
||||
# Path for local
|
||||
data_path: "/cache/data"
|
||||
output_path: "/cache/train"
|
||||
load_path: "/cache/checkpoint_path"
|
||||
device_target: "Ascend"
|
||||
enable_profiling: False
|
||||
|
||||
# ==============================================================================
|
||||
# prepare *.mindrecord* data
|
||||
coco_data_dir: ""
|
||||
mindrecord_dir: "" # also used by train.py
|
||||
mindrecord_prefix: "coco_det.train.mind"
|
||||
|
||||
# train related
|
||||
save_result_dir: ""
|
||||
device_id: 0
|
||||
device_num: 1
|
||||
|
||||
distribute: 'false'
|
||||
need_profiler: "false"
|
||||
profiler_path: "./profiler"
|
||||
epoch_size: 1
|
||||
train_steps: -1
|
||||
enable_save_ckpt: "true"
|
||||
do_shuffle: "true"
|
||||
enable_data_sink: "true"
|
||||
data_sink_steps: -1
|
||||
save_checkpoint_path: ""
|
||||
load_checkpoint_path: ""
|
||||
save_checkpoint_steps: 1221
|
||||
save_checkpoint_num: 1
|
||||
|
||||
# val related
|
||||
data_dir: ""
|
||||
run_mode: "test"
|
||||
enable_eval: "true"
|
||||
visual_image: "false"
|
||||
|
||||
# export related
|
||||
export_load_ckpt: ''
|
||||
export_format: ''
|
||||
export_name: ''
|
||||
|
||||
# 310 infer
|
||||
val_data_dir: ''
|
||||
predict_dir: ''
|
||||
result_path: ''
|
||||
label_path: ''
|
||||
meta_path: ''
|
||||
save_path: ''
|
||||
|
||||
dataset_config:
|
||||
num_classes: 80
|
||||
max_objs: 128
|
||||
input_res: [512, 512]
|
||||
output_res: [128, 128]
|
||||
rand_crop: True
|
||||
shift: 0.1
|
||||
scale: 0.4
|
||||
down_ratio: 4
|
||||
aug_rot: 0.0
|
||||
rotate: 0
|
||||
flip_prop: 0.5
|
||||
color_aug: True
|
||||
coco_classes: ['background', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
|
||||
'train', 'truck', 'boat', 'traffic light', 'fire hydrant',
|
||||
'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
|
||||
'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra',
|
||||
'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
|
||||
'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
|
||||
'kite', 'baseball bat', 'baseball glove', 'skateboard',
|
||||
'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
|
||||
'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
|
||||
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
|
||||
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
|
||||
'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
|
||||
'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink',
|
||||
'refrigerator', 'book', 'clock', 'vase', 'scissors',
|
||||
'teddy bear', 'hair drier', 'toothbrush']
|
||||
mean: np.array([0.40789654, 0.44719302, 0.47026115], dtype=np.float32)
|
||||
std: np.array([0.28863828, 0.27408164, 0.27809835], dtype=np.float32)
|
||||
eig_val: np.array([0.2141788, 0.01817699, 0.00341571], dtype=np.float32)
|
||||
eig_vec: np.array([[-0.58752847, -0.69563484, 0.41340352],
|
||||
[-0.5832747, 0.00994535, -0.81221408],
|
||||
[-0.56089297, 0.71832671, 0.41158938]], dtype=np.float32)
|
||||
|
||||
net_config:
|
||||
num_stacks: 2
|
||||
down_ratio: 4
|
||||
num_classes: 80
|
||||
n: 5
|
||||
cnv_dim: 256
|
||||
modules: [2, 2, 2, 2, 2, 4]
|
||||
dims: [256, 256, 384, 384, 384, 512]
|
||||
dense_wh: False
|
||||
norm_wh: False
|
||||
cat_spec_wh: False
|
||||
reg_offset: True
|
||||
hm_weight: 1
|
||||
off_weight: 1
|
||||
wh_weight: 0.1
|
||||
mse_loss: False
|
||||
reg_loss: 'l1'
|
||||
|
||||
train_config:
|
||||
batch_size: 12
|
||||
loss_scale_value: 1024
|
||||
optimizer: 'Adam'
|
||||
lr_schedule: 'MultiDecay'
|
||||
Adam:
|
||||
weight_decay: 0.0
|
||||
decay_filter: "lambda x: x.name.endswith('.bias') or x.name.endswith('.beta') or x.name.endswith('.gamma')"
|
||||
PolyDecay:
|
||||
learning_rate: 0.00024 # 2.4e-4
|
||||
end_learning_rate: 0.0000005 # 5e-7
|
||||
power: 5.0
|
||||
eps: 0.0000001 # 1e-7
|
||||
warmup_steps: 2000
|
||||
MultiDecay:
|
||||
learning_rate: 0.00024 # 2.4e-4
|
||||
eps: 0.0000001 # 1e-7
|
||||
warmup_steps: 2000
|
||||
multi_epochs: [105, 125]
|
||||
factor: 10
|
||||
|
||||
eval_config:
|
||||
SOFT_NMS: True
|
||||
keep_res: True
|
||||
multi_scales: [1.0]
|
||||
K: 100
|
||||
score_thresh: 0.3
|
||||
valid_ids: [
|
||||
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13,
|
||||
14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
|
||||
24, 25, 27, 28, 31, 32, 33, 34, 35, 36,
|
||||
37, 38, 39, 40, 41, 42, 43, 44, 46, 47,
|
||||
48, 49, 50, 51, 52, 53, 54, 55, 56, 57,
|
||||
58, 59, 60, 61, 62, 63, 64, 65, 67, 70,
|
||||
72, 73, 74, 75, 76, 77, 78, 79, 80, 81,
|
||||
82, 84, 85, 86, 87, 88, 89, 90]
|
||||
color_list: [0.000, 0.800, 1.000,
|
||||
0.850, 0.325, 0.098,
|
||||
0.929, 0.694, 0.125,
|
||||
0.494, 0.184, 0.556,
|
||||
0.466, 0.674, 0.188,
|
||||
0.301, 0.745, 0.933,
|
||||
0.635, 0.078, 0.184,
|
||||
0.300, 0.300, 0.300,
|
||||
0.600, 0.600, 0.600,
|
||||
1.000, 0.000, 0.000,
|
||||
1.000, 0.500, 0.000,
|
||||
0.749, 0.749, 0.000,
|
||||
0.000, 1.000, 0.000,
|
||||
0.000, 0.000, 1.000,
|
||||
0.667, 0.000, 1.000,
|
||||
0.333, 0.333, 0.000,
|
||||
0.333, 0.667, 0.333,
|
||||
0.333, 1.000, 0.000,
|
||||
0.667, 0.333, 0.000,
|
||||
0.667, 0.667, 0.000,
|
||||
0.667, 1.000, 0.000,
|
||||
1.000, 0.333, 0.000,
|
||||
1.000, 0.667, 0.000,
|
||||
1.000, 1.000, 0.000,
|
||||
0.000, 0.333, 0.500,
|
||||
0.000, 0.667, 0.500,
|
||||
0.000, 1.000, 0.500,
|
||||
0.333, 0.000, 0.500,
|
||||
0.333, 0.333, 0.500,
|
||||
0.333, 0.667, 0.500,
|
||||
0.333, 1.000, 0.500,
|
||||
0.667, 0.000, 0.500,
|
||||
0.667, 0.333, 0.500,
|
||||
0.667, 0.667, 0.500,
|
||||
0.667, 1.000, 0.500,
|
||||
1.000, 0.000, 0.500,
|
||||
1.000, 0.333, 0.500,
|
||||
1.000, 0.667, 0.500,
|
||||
1.000, 1.000, 0.500,
|
||||
0.000, 0.333, 1.000,
|
||||
0.000, 0.667, 1.000,
|
||||
0.000, 1.000, 1.000,
|
||||
0.333, 0.000, 1.000,
|
||||
0.333, 0.333, 1.000,
|
||||
0.333, 0.667, 1.000,
|
||||
0.333, 1.000, 1.000,
|
||||
0.667, 0.000, 1.000,
|
||||
0.667, 0.333, 1.000,
|
||||
0.667, 0.667, 1.000,
|
||||
0.667, 1.000, 1.000,
|
||||
1.000, 0.000, 1.000,
|
||||
1.000, 0.333, 1.000,
|
||||
1.000, 0.667, 1.000,
|
||||
0.167, 0.800, 0.000,
|
||||
0.333, 0.000, 0.000,
|
||||
0.500, 0.000, 0.000,
|
||||
0.667, 0.000, 0.000,
|
||||
0.833, 0.000, 0.000,
|
||||
1.000, 0.000, 0.000,
|
||||
0.000, 0.667, 0.400,
|
||||
0.000, 0.333, 0.000,
|
||||
0.000, 0.500, 0.000,
|
||||
0.000, 0.667, 0.000,
|
||||
0.000, 0.833, 0.000,
|
||||
0.000, 1.000, 0.000,
|
||||
0.000, 0.000, 0.167,
|
||||
0.000, 0.000, 0.333,
|
||||
0.000, 0.000, 0.500,
|
||||
0.000, 0.000, 0.667,
|
||||
0.000, 0.000, 0.833,
|
||||
0.000, 0.000, 1.000,
|
||||
0.000, 0.200, 0.800,
|
||||
0.143, 0.143, 0.543,
|
||||
0.286, 0.286, 0.286,
|
||||
0.429, 0.429, 0.429,
|
||||
0.571, 0.571, 0.571,
|
||||
0.714, 0.714, 0.714,
|
||||
0.857, 0.857, 0.857,
|
||||
0.000, 0.447, 0.741,
|
||||
0.50, 0.5, 0]
|
||||
|
||||
export_config:
|
||||
input_res: dataset_config.input_res
|
||||
ckpt_file: "./ckpt_file.ckpt"
|
||||
export_format: "MINDIR"
|
||||
export_name: "CenterNet_ObjectDetection"
|
||||
|
||||
---
|
||||
# Help description for each configuration
|
||||
enable_modelarts: "Whether training on modelarts, default: False"
|
||||
data_url: "Url for modelarts"
|
||||
train_url: "Url for modelarts"
|
||||
data_path: "The location of the input data."
|
||||
output_path: "The location of the output file."
|
||||
device_target: "Running platform, default is Ascend."
|
||||
enable_profiling: 'Whether enable profiling while training, default: False'
|
||||
|
||||
distribute: "Run distribute, default is false."
|
||||
need_profiler: "Profiling to parsing runtime info, default is false."
|
||||
profiler_path: "The path to save profiling data"
|
||||
epoch_size: "Epoch size, default is 1."
|
||||
train_steps: "Training Steps, default is -1, i.e. run all steps according to epoch number."
|
||||
device_id: "Device id, default is 0."
|
||||
device_num: "Use device nums, default is 1."
|
||||
enable_save_ckpt: "Enable save checkpoint, default is true."
|
||||
do_shuffle: "Enable shuffle for dataset, default is true."
|
||||
enable_data_sink: "Enable data sink, default is true."
|
||||
data_sink_steps: "Sink steps for each epoch, default is 1."
|
||||
save_checkpoint_path: "Save checkpoint path"
|
||||
load_checkpoint_path: "Load checkpoint file path"
|
||||
save_checkpoint_steps: "Save checkpoint steps, default is 1000."
|
||||
save_checkpoint_num: "Save checkpoint numbers, default is 1."
|
||||
mindrecord_dir: "Mindrecord dataset files directory"
|
||||
mindrecord_prefix: "Prefix of MindRecord dataset filename."
|
||||
visual_image: "Visulize the ground truth and predicted image"
|
||||
save_result_dir: "The path to save the predict results"
|
||||
|
||||
data_dir: "Dataset directory, the absolute image path is joined by the data_dir, and the relative path in anno_path"
|
||||
run_mode: "test or validation, default is test."
|
||||
enable_eval: "Whether evaluate accuracy after prediction"
|
||||
|
||||
---
|
||||
device_target: ['Ascend']
|
||||
distribute: ["true", "false"]
|
||||
need_profiler: ["true", "false"]
|
||||
enable_save_ckpt: ["true", "false"]
|
||||
do_shuffle: ["true", "false"]
|
||||
enable_data_sink: ["true", "false"]
|
||||
export_format: ["MINDIR"]
|
|
@ -20,7 +20,6 @@ import os
|
|||
import time
|
||||
import copy
|
||||
import json
|
||||
import cv2
|
||||
from pycocotools.coco import COCO
|
||||
from pycocotools.cocoeval import COCOeval
|
||||
|
@ -31,53 +30,62 @@ import mindspore.log as logger
|
|||
from src import COCOHP, CenterNetDetEval
|
||||
from src import convert_eval_format, post_process, merge_outputs
|
||||
from src import visual_image
|
||||
from src.model_utils.config import config, dataset_config, net_config, eval_config
|
||||
from src.model_utils.moxing_adapter import moxing_wrapper
|
||||
from src.model_utils.device_adapter import get_device_id
|
||||
|
||||
_current_dir = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
def modelarts_pre_process():
|
||||
"""modelarts pre process function."""
|
||||
try:
|
||||
from nms import soft_nms
|
||||
print('soft_nms_attributes: {}'.format(soft_nms.__dir__()))
|
||||
except ImportError:
|
||||
print('NMS not installed! Trying to install...\n')
|
||||
cur_path = os.path.dirname(os.path.abspath(__file__))
|
||||
os.system('cd {}/CenterNet/src/lib/external/ && make && python setup.py install && cd - '.format(cur_path))
|
||||
try:
|
||||
from nms import soft_nms
|
||||
print('soft_nms_attributes: {}'.format(soft_nms.__dir__()))
|
||||
except ImportError:
|
||||
print('Installation failed! Check if the folder "./CenterNet" exists.')
|
||||
else:
|
||||
print('Install nms successfully')
|
||||
config.data_dir = config.data_path
|
||||
config.load_checkpoint_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), config.load_checkpoint_path)
|
||||
|
||||
|
||||
@moxing_wrapper(pre_process=modelarts_pre_process)
|
||||
def predict():
|
||||
'''
|
||||
Predict function
|
||||
'''
|
||||
context.set_context(mode=context.GRAPH_MODE, device_target=config.device_target)
|
||||
if config.device_target == "Ascend":
|
||||
context.set_context(device_id=get_device_id())
|
||||
enable_nms_fp16 = False
|
||||
else:
|
||||
enable_nms_fp16 = True
|
||||
|
||||
logger.info("Begin creating {} dataset".format(args_opt.run_mode))
|
||||
coco = COCOHP(dataset_config, run_mode=args_opt.run_mode, net_opt=net_config,
|
||||
enable_visual_image=(args_opt.visual_image == "true"), save_path=args_opt.save_result_dir,)
|
||||
coco.init(args_opt.data_dir, keep_res=eval_config.keep_res)
|
||||
logger.info("Begin creating {} dataset".format(config.run_mode))
|
||||
coco = COCOHP(dataset_config, run_mode=config.run_mode, net_opt=net_config,
|
||||
enable_visual_image=config.visual_image, save_path=config.save_result_dir,)
|
||||
coco.init(config.data_dir, keep_res=eval_config.keep_res)
|
||||
dataset = coco.create_eval_dataset()
|
||||
|
||||
net_for_eval = CenterNetDetEval(net_config, eval_config.K, enable_nms_fp16)
|
||||
net_for_eval.set_train(False)
|
||||
|
||||
param_dict = load_checkpoint(config.load_checkpoint_path)
|
||||
load_param_into_net(net_for_eval, param_dict)
|
||||
|
||||
# save results
|
||||
save_path = os.path.join(config.save_result_dir, config.run_mode)
|
||||
if not os.path.exists(save_path):
|
||||
os.makedirs(save_path)
|
||||
if config.visual_image == "true":
|
||||
save_pred_image_path = os.path.join(save_path, "pred_image")
|
||||
if not os.path.exists(save_pred_image_path):
|
||||
os.makedirs(save_pred_image_path)
|
||||
|
@ -119,10 +127,10 @@ def predict():
|
|||
pred_annos["images"].append(image_info)
|
||||
for image_anno in pred_json["annotations"]:
|
||||
pred_annos["annotations"].append(image_anno)
|
||||
if config.visual_image == "true":
|
||||
img_file = os.path.join(coco.image_path, gt_image_info[0]['file_name'])
|
||||
gt_image = cv2.imread(img_file)
|
||||
if config.run_mode != "test":
|
||||
annos = coco.coco.loadAnns(coco.anns[image_id])
|
||||
visual_image(copy.deepcopy(gt_image), annos, save_gt_image_path,
|
||||
score_threshold=eval_config.score_thresh)
|
||||
|
@ -130,15 +138,15 @@ def predict():
|
|||
visual_image(gt_image, anno, save_pred_image_path, score_threshold=eval_config.score_thresh)
|
||||
|
||||
# save results
|
||||
save_path = os.path.join(config.save_result_dir, config.run_mode)
|
||||
if not os.path.exists(save_path):
|
||||
os.makedirs(save_path)
|
||||
pred_anno_file = os.path.join(save_path, '{}_pred_result.json').format(config.run_mode)
|
||||
json.dump(pred_annos, open(pred_anno_file, 'w'))
|
||||
pred_res_file = os.path.join(save_path, '{}_pred_eval.json').format(config.run_mode)
|
||||
json.dump(pred_annos["annotations"], open(pred_res_file, 'w'))
|
||||
|
||||
if config.run_mode != "test" and config.enable_eval:
|
||||
run_eval(coco.annot_path, pred_res_file)
|
||||
|
||||
|
||||
|
@ -151,5 +159,6 @@ def run_eval(gt_anno, pred_anno):
|
|||
coco_eval.accumulate()
|
||||
coco_eval.summarize()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
predict()
|
||||
|
|
|
@ -16,21 +16,27 @@
|
|||
Export CenterNet mindir model.
|
||||
"""
|
||||
|
||||
import os
|
||||
import numpy as np
|
||||
import mindspore
|
||||
from mindspore import context, Tensor
|
||||
from mindspore.train.serialization import load_checkpoint, load_param_into_net, export
|
||||
|
||||
from src import CenterNetDetEval
|
||||
from src.model_utils.config import config, net_config, eval_config, export_config
|
||||
from src.model_utils.moxing_adapter import moxing_wrapper
|
||||
|
||||
parser = argparse.ArgumentParser(description='centernet export')
|
||||
parser.add_argument("--device_id", type=int, default=0, help="Device id")
|
||||
args = parser.parse_args()
|
||||
|
||||
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend", device_id=args.device_id)
|
||||
def modelarts_pre_process():
|
||||
'''modelarts pre process function.'''
|
||||
export_config.ckpt_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), export_config.ckpt_file)
|
||||
export_config.export_name = os.path.join(config.output_path, export_config.export_name)
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
@moxing_wrapper(pre_process=modelarts_pre_process)
|
||||
def run_export():
|
||||
'''export function'''
|
||||
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend", device_id=config.device_id)
|
||||
net = CenterNetDetEval(net_config, eval_config.K)
|
||||
net.set_train(False)
|
||||
|
||||
|
@ -38,7 +44,10 @@ if __name__ == '__main__':
|
|||
load_param_into_net(net, param_dict)
|
||||
net.set_train(False)
|
||||
|
||||
input_shape = [1, 3, export_config.input_res[0], export_config.input_res[1]]
|
||||
input_data = Tensor(np.random.uniform(-1.0, 1.0, size=input_shape).astype(np.float32))
|
||||
input_data = Tensor(np.zeros([1, 3, export_config.input_res[0], export_config.input_res[1]]), mindspore.float32)
|
||||
|
||||
export(net, input_data, file_name=export_config.export_name, file_format=export_config.export_format)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
run_export()
|
||||
|
|
|
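`run_export` boils down to tracing the evaluation cell with a fixed-shape dummy input. A hedged sketch of the same flow for any MindSpore cell; `export_mindir` and its defaults are illustrative, not part of this commit:

```python
# Sketch: exporting an evaluation Cell to MINDIR with a static input shape.
import numpy as np
import mindspore
from mindspore import Tensor
from mindspore.train.serialization import export

def export_mindir(net, height=512, width=512, name="CenterNet_ObjectDetection"):
    net.set_train(False)
    # Ascend 310 inference consumes a static NCHW input, so a dummy tensor fixes the shape.
    dummy = Tensor(np.zeros([1, 3, height, width]), mindspore.float32)
    export(net, dummy, file_name=name, file_format="MINDIR")
```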
@@ -0,0 +1,64 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""post process for 310 inference"""
+import os
+import json
+import numpy as np
+import pycocotools.coco as coco
+from pycocotools.cocoeval import COCOeval
+from src.model_utils.config import config, dataset_config, eval_config
+from src import convert_eval_format, post_process, merge_outputs
+
+
+def cal_acc(result_path, label_path, meta_path, save_path):
+    """calculate inference accuracy"""
+    name_list = np.load(os.path.join(meta_path, "name_list.npy"), allow_pickle=True)
+    meta_list = np.load(os.path.join(meta_path, "meta_list.npy"), allow_pickle=True)
+
+    label_infor = coco.COCO(label_path)
+    pred_annos = {"images": [], "annotations": []}
+    for num, image_id in enumerate(name_list):
+        meta = meta_list[num]
+        pre_image = np.fromfile(os.path.join(result_path) + "/eval2017_image_" + str(image_id) + "_0.bin",
+                                dtype=np.float32).reshape((1, 100, 6))
+        detections = []
+        for scale in eval_config.multi_scales:
+            dets = post_process(pre_image, meta, scale, dataset_config.num_classes)
+            detections.append(dets)
+        detections = merge_outputs(detections, dataset_config.num_classes, eval_config.SOFT_NMS)
+        pred_json = convert_eval_format(detections, image_id, eval_config.valid_ids)
+        label_infor.loadImgs([image_id])
+        for image_info in pred_json["images"]:
+            pred_annos["images"].append(image_info)
+        for image_anno in pred_json["annotations"]:
+            pred_annos["annotations"].append(image_anno)
+
+    if not os.path.exists(save_path):
+        os.makedirs(save_path)
+    pred_anno_file = os.path.join(save_path, '{}_pred_result.json').format(config.run_mode)
+    json.dump(pred_annos, open(pred_anno_file, 'w'))
+    pred_res_file = os.path.join(save_path, '{}_pred_eval.json').format(config.run_mode)
+    json.dump(pred_annos["annotations"], open(pred_res_file, 'w'))
+
+    coco_anno = coco.COCO(label_path)
+    coco_dets = coco_anno.loadRes(pred_res_file)
+    coco_eval = COCOeval(coco_anno, coco_dets, "bbox")
+    coco_eval.evaluate()
+    coco_eval.accumulate()
+    coco_eval.summarize()
+
+
+if __name__ == '__main__':
+    cal_acc(config.result_path, config.label_path, config.meta_path, config.save_path)
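Each Ascend 310 result file holds one image's decoded detections. A small sketch of the read step above; the per-column meaning is an assumption based on the decoder's K=100 x 6 output, not spelled out in this file:

```python
# Sketch: decoding one Ascend 310 output file before post-processing.
# Assumed column layout per detection: (x1, y1, x2, y2, score, class).
import numpy as np

def load_310_result(result_path, image_id, k=100):
    path = f"{result_path}/eval2017_image_{image_id}_0.bin"
    # one batch of K detections, 6 float32 values each
    return np.fromfile(path, dtype=np.float32).reshape((1, k, 6))
```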
@@ -0,0 +1,56 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""pre process for 310 inference"""
+import os
+import numpy as np
+from src.model_utils.config import config, dataset_config, eval_config, net_config
+from src.dataset import COCOHP
+
+
+def preprocess(dataset_path, preprocess_path):
+    """preprocess input images"""
+    meta_path = os.path.join(preprocess_path, "meta/meta")
+    result_path = os.path.join(preprocess_path, "data")
+    if not os.path.exists(meta_path):
+        os.makedirs(os.path.join(preprocess_path, "meta/meta"))
+    if not os.path.exists(result_path):
+        os.makedirs(os.path.join(preprocess_path, "data"))
+    coco = COCOHP(dataset_config, run_mode="val", net_opt=net_config)
+    coco.init(dataset_path, keep_res=False)
+    dataset = coco.create_eval_dataset()
+    name_list = []
+    meta_list = []
+    i = 0
+    for data in dataset.create_dict_iterator(num_epochs=1):
+        img_id = data['image_id'].asnumpy().reshape((-1))[0]
+        image = data['image'].asnumpy()
+        for scale in eval_config.multi_scales:
+            image_preprocess, meta = coco.pre_process_for_test(image, img_id, scale)
+        evl_file_name = "eval2017_image" + "_" + str(img_id) + ".bin"
+        evl_file_path = result_path + "/" + evl_file_name
+        image_preprocess.tofile(evl_file_path)
+        meta_file_path = os.path.join(preprocess_path + "/meta/meta", str(img_id) + ".txt")
+        with open(meta_file_path, 'w+') as f:
+            f.write(str(meta))
+        name_list.append(img_id)
+        meta_list.append(meta)
+        i += 1
+        print(f"preprocess: no.[{i}], img_name:{img_id}")
+    np.save(os.path.join(preprocess_path + "/meta", "name_list.npy"), np.array(name_list))
+    np.save(os.path.join(preprocess_path + "/meta", "meta_list.npy"), np.array(meta_list))
+
+
+if __name__ == '__main__':
+    preprocess(config.val_data_dir, config.predict_dir)
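`meta_list.npy` stores plain Python dicts, which is why `postprocess.py` loads it with `allow_pickle=True`. A tiny round-trip sketch; the meta keys follow `post_process`'s usage and the values are illustrative:

```python
# Sketch: the meta round-trip between preprocess.py and postprocess.py.
import numpy as np

meta_list = [{"c": [320, 240], "s": 512, "out_height": 128, "out_width": 128}]  # illustrative
np.save("meta/meta_list.npy", np.array(meta_list))
restored = np.load("meta/meta_list.npy", allow_pickle=True)  # dicts need allow_pickle
assert restored[0]["out_height"] == 128
```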
@@ -1,3 +1,4 @@
 opencv-python
 numpy
 pycocotools
+Cython

model_zoo/research/cv/centernet_det/scripts/ascend_distributed_launcher/hyper_parameter_config.ini (Executable file → Normal file)
@@ -6,8 +6,8 @@ do_shuffle=true
 enable_data_sink=true
 data_sink_steps=-1
 save_checkpoint_path=./
-save_checkpoint_steps=6105
-save_checkpoint_num=20
+save_checkpoint_steps=1221
+save_checkpoint_num=1
 mindrecord_prefix="coco_det.train.mind"
 need_profiler=false
 profiler_path=./profiler
@@ -0,0 +1,145 @@
+#!/bin/bash
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+if [[ $# -lt 3 || $# -gt 4 ]]; then
+    echo "Usage: bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [NEED_PREPROCESS] [DEVICE_ID]
+    NEED_PREPROCESS means whether preprocessing is needed; its value is 'y' or 'n'.
+    DEVICE_ID is optional; it can be set by the environment variable device_id, otherwise the value is zero"
+    exit 1
+fi
+
+get_real_path(){
+    if [ "${1:0:1}" == "/" ]; then
+        echo "$1"
+    else
+        echo "$(realpath -m $PWD/$1)"
+    fi
+}
+model=$(get_real_path $1)
+dataset_path=$(get_real_path $2)
+
+if [ "$3" == "y" ] || [ "$3" == "n" ];then
+    need_preprocess=$3
+else
+    echo "whether preprocessing is needed; its value must be in [y, n]"
+    exit 1
+fi
+
+device_id=0
+if [ $# == 4 ]; then
+    device_id=$4
+fi
+
+echo "mindir name: "$model
+echo "dataset path: "$dataset_path
+echo "need preprocess: "$need_preprocess
+echo "device id: "$device_id
+
+export ASCEND_HOME=/usr/local/Ascend/
+if [ -d ${ASCEND_HOME}/ascend-toolkit ]; then
+    export PATH=$ASCEND_HOME/fwkacllib/bin:$ASCEND_HOME/fwkacllib/ccec_compiler/bin:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin:$ASCEND_HOME/ascend-toolkit/latest/atc/bin:$PATH
+    export LD_LIBRARY_PATH=$ASCEND_HOME/fwkacllib/lib64:/usr/local/lib:$ASCEND_HOME/ascend-toolkit/latest/atc/lib64:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/lib64:$ASCEND_HOME/driver/lib64:$ASCEND_HOME/add-ons:$LD_LIBRARY_PATH
+    export TBE_IMPL_PATH=$ASCEND_HOME/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe
+    export PYTHONPATH=$ASCEND_HOME/fwkacllib/python/site-packages:${TBE_IMPL_PATH}:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/python/site-packages:$PYTHONPATH
+    export ASCEND_OPP_PATH=$ASCEND_HOME/ascend-toolkit/latest/opp
+else
+    export PATH=$ASCEND_HOME/fwkacllib/bin:$ASCEND_HOME/fwkacllib/ccec_compiler/bin:$ASCEND_HOME/atc/ccec_compiler/bin:$ASCEND_HOME/atc/bin:$PATH
+    export LD_LIBRARY_PATH=$ASCEND_HOME/fwkacllib/lib64:/usr/local/lib:$ASCEND_HOME/atc/lib64:$ASCEND_HOME/acllib/lib64:$ASCEND_HOME/driver/lib64:$ASCEND_HOME/add-ons:$LD_LIBRARY_PATH
+    export PYTHONPATH=$ASCEND_HOME/fwkacllib/python/site-packages:$ASCEND_HOME/atc/python/site-packages:$PYTHONPATH
+    export ASCEND_OPP_PATH=$ASCEND_HOME/opp
+fi
+
+function preprocess_data()
+{
+    if [ -d preprocess ]; then
+        rm -rf ./preprocess
+    fi
+    mkdir preprocess
+    python3.7 ../preprocess.py --val_data_dir=$dataset_path --predict_dir=./preprocess/ >& preprocess.log
+}
+
+function compile_app()
+{
+    cd ../ascend310_infer || exit
+    bash build.sh &> build.log
+}
+
+function infer()
+{
+    cd - || exit
+    if [ -d result_Files ]; then
+        rm -rf ./result_Files
+    fi
+    if [ -d time_Result ]; then
+        rm -rf ./time_Result
+    fi
+    mkdir result_Files
+    mkdir time_Result
+
+    ../ascend310_infer/out/main --mindir_path=$model --input0_path=./preprocess/data --device_id=$device_id &> infer.log
+}
+
+# install nms module from third party
+if python -c "import nms" > /dev/null 2>&1
+then
+    echo "NMS module already exists, no need to reinstall."
+else
+    if [ -d './CenterNet' ]
+    then
+        echo "NMS module was not found, but has been downloaded"
+    else
+        echo "NMS module was not found, install it now..."
+        git clone https://github.com/xingyizhou/CenterNet.git
+    fi
+    cd CenterNet/src/lib/external/ || exit
+    make
+    python setup.py install
+    cd - || exit
+    rm -rf CenterNet
+fi
+
+function cal_ap()
+{
+    if [ -d acc ]; then
+        rm -rf ./acc
+    fi
+    mkdir acc
+    python3.7 ../postprocess.py --result_path=./result_Files --label_path=$dataset_path/annotations/instances_val2017.json --meta_path=./preprocess/meta --save_path=./acc &> acc.log
+}
+
+if [ $need_preprocess == "y" ]; then
+    preprocess_data
+    if [ $? -ne 0 ]; then
+        echo "preprocess dataset failed"
+        exit 1
+    fi
+fi
+compile_app
+if [ $? -ne 0 ]; then
+    echo "compile app code failed"
+    exit 1
+fi
+infer
+if [ $? -ne 0 ]; then
+    echo "execute inference failed"
+    exit 1
+fi
+cal_ap
+if [ $? -ne 0 ]; then
+    echo "calculate accuracy failed"
+    exit 1
+fi
model_zoo/research/cv/centernet_det/scripts/run_standalone_eval_ascend.sh (Executable file → Normal file)
@@ -29,18 +29,24 @@ PROJECT_DIR=$(cd "$(dirname "$0")" || exit; pwd)
 CUR_DIR=`pwd`
 export GLOG_log_dir=${CUR_DIR}/ms_log
 export GLOG_logtostderr=0
 export DEVICE_ID=$DEVICE_ID

 # install nms module from third party
 if python -c "import nms" > /dev/null 2>&1
 then
     echo "NMS module already exists, no need to reinstall."
 else
-    echo "NMS module was not found, install it now..."
-    git clone https://github.com/xingyizhou/CenterNet.git
-    cd CenterNet/src/lib/external/
+    if [ -d './CenterNet' ]
+    then
+        echo "NMS module was not found, but has been downloaded"
+    else
+        echo "NMS module was not found, install it now..."
+        git clone https://github.com/xingyizhou/CenterNet.git
+    fi
+    cd CenterNet/src/lib/external/ || exit
     make
     python setup.py install
-    cd -
+    cd - || exit
     rm -rf CenterNet
 fi

@@ -50,6 +56,6 @@ python ${PROJECT_DIR}/../eval.py \
     --load_checkpoint_path=$LOAD_CHECKPOINT_PATH \
     --data_dir=$DATA_DIR \
     --run_mode=$RUN_MODE \
-    --visual_image=false \
+    --visual_image=true \
     --enable_eval=true \
     --save_result_dir=./ > eval_log.txt 2>&1 &
model_zoo/research/cv/centernet_det/scripts/run_standalone_train_ascend.sh (Executable file → Normal file)
@@ -35,6 +35,7 @@ PROJECT_DIR=$(cd "$(dirname "$0")" || exit; pwd)
 CUR_DIR=`pwd`
 export GLOG_log_dir=${CUR_DIR}/ms_log
 export GLOG_logtostderr=0
+export DEVICE_ID=$DEVICE_ID

 python ${PROJECT_DIR}/../train.py \
     --distribute=false \

@@ -47,8 +48,9 @@ python ${PROJECT_DIR}/../train.py \
     --data_sink_steps=-1 \
     --epoch_size=130 \
     --load_checkpoint_path=$LOAD_CHECKPOINT_PATH \
-    --save_checkpoint_steps=6105 \
+    --save_checkpoint_steps=9772 \
     --save_checkpoint_num=1 \
     --mindrecord_dir=$MINDRECORD_DIR \
     --mindrecord_prefix="coco_det.train.mind" \
+    --visual_image=false \
     --save_result_dir="" > training_log.txt 2>&1 &
@@ -23,44 +23,33 @@ from mindspore import context
 from mindspore import dtype as mstype
 from mindspore.common.tensor import Tensor
 from mindspore.context import ParallelMode
 from mindspore.common.initializer import Constant
 from mindspore.communication.management import get_group_size
 from mindspore.nn.wrap.grad_reducer import DistributedGradReducer
 from src.utils import Sigmoid, GradScale
 from src.utils import FocalLoss, RegLoss
 from src.decode import DetectionDecode
-from src.config import dataset_config as data_cfg
 from src.hourglass import Convolution, Residual, Kp_module
+from .model_utils.config import dataset_config as data_cfg

 BN_MOMENTUM = 0.9


-def _generate_feature(cin, cout, kernel_size, head_name, head, num_stacks, with_bn=True):
+def _generate_feature(cin, cout, kernel_size, head, num_stacks, with_bn=True):
     """
-    Generate feature extraction function of each target head
+    Generate hourglass network feature extraction function of each target head
     """
-    module = None
-    if 'hm' in head_name:
-        module = nn.CellList([
-            nn.SequentialCell(
-                Convolution(cin, cout, kernel_size, with_bn=with_bn),
-                nn.Conv2d(cout, head, kernel_size=1, has_bias=True, bias_init=Constant(-2.19), pad_mode='pad')
-            ) for _ in range(num_stacks)
-        ])
-    else:
-        module = nn.CellList([
-            nn.SequentialCell(
-                Convolution(cin, cout, kernel_size, with_bn=with_bn),
-                nn.Conv2d(cout, head, kernel_size=1, has_bias=True, pad_mode='pad')
-            ) for _ in range(num_stacks)
-        ])
+    module = nn.CellList([
+        nn.SequentialCell(
+            Convolution(cin, cout, kernel_size, with_bn=with_bn),
+            nn.Conv2d(cout, head, kernel_size=1, has_bias=True, pad_mode='pad')
+        ) for _ in range(num_stacks)
+    ])
     return module


 class GatherDetectionFeatureCell(nn.Cell):
     """
-    Gather features of object detection.
+    Gather hourglass features of object detection.

     Args:
         net_config: The config info of CenterNet network.

@@ -71,13 +60,15 @@ class GatherDetectionFeatureCell(nn.Cell):

     def __init__(self, net_config):
         super(GatherDetectionFeatureCell, self).__init__()
-        self.heads = net_config.heads
         self.nstack = net_config.num_stacks
         self.n = net_config.n
         self.cnv_dim = net_config.cnv_dim
         self.dims = net_config.dims
         self.modules = net_config.modules
         curr_dim = self.dims[0]
+        self.heads = {'hm': data_cfg.num_classes, 'wh': 2}
+        if net_config.reg_offset:
+            self.heads.update({'reg': 2})

         self.pre = nn.SequentialCell(
             Convolution(3, 128, 7, stride=2),

@@ -114,12 +105,13 @@ class GatherDetectionFeatureCell(nn.Cell):

         self.relu = nn.ReLU()

-        self.hm_fn = _generate_feature(cin=self.cnv_dim, cout=curr_dim, kernel_size=3, head_name='hm',
-                                       head=self.heads['hm'], num_stacks=self.nstack, with_bn=False)
-        self.wh_fn = _generate_feature(cin=self.cnv_dim, cout=curr_dim, kernel_size=3, head_name='wh',
-                                       head=self.heads['wh'], num_stacks=self.nstack, with_bn=False)
-        self.reg_fn = _generate_feature(cin=self.cnv_dim, cout=curr_dim, kernel_size=3, head_name='reg',
-                                       head=self.heads['reg'], num_stacks=self.nstack, with_bn=False)
+        self.hm_fn = _generate_feature(cin=self.cnv_dim, cout=curr_dim, kernel_size=3, head=self.heads['hm'],
+                                       num_stacks=self.nstack, with_bn=False)
+        self.wh_fn = _generate_feature(cin=self.cnv_dim, cout=curr_dim, kernel_size=3, head=self.heads['wh'],
+                                       num_stacks=self.nstack, with_bn=False)
+        if net_config.reg_offset:
+            self.reg_fn = _generate_feature(cin=self.cnv_dim, cout=curr_dim, kernel_size=3, head=self.heads['reg'],
+                                            num_stacks=self.nstack, with_bn=False)

     def construct(self, image):
         """Defines the computation performed."""

@@ -134,13 +126,9 @@ class GatherDetectionFeatureCell(nn.Cell):
                 inter = self.inters[ind](inter)

             out = {}
-            for head in self.heads.keys():
-                if head == 'hm':
-                    out[head] = self.hm_fn[ind](cnv)
-                if head == 'wh':
-                    out[head] = self.wh_fn[ind](cnv)
-                if head == 'reg':
-                    out[head] = self.reg_fn[ind](cnv)
+            out['hm'] = self.hm_fn[ind](cnv)
+            out['wh'] = self.wh_fn[ind](cnv)
+            out['reg'] = self.reg_fn[ind](cnv)
             outs += (out,)
         return outs

@@ -158,20 +146,18 @@ class CenterNetLossCell(nn.Cell):
     def __init__(self, net_config):
         super(CenterNetLossCell, self).__init__()
         self.network = GatherDetectionFeatureCell(net_config)
-        self.net_config = net_config
+        self.num_stacks = net_config.num_stacks
         self.reduce_sum = ops.ReduceSum()
         self.Sigmoid = Sigmoid()
         self.FocalLoss = FocalLoss()
         self.crit = nn.MSELoss() if net_config.mse_loss else self.FocalLoss
         self.crit_reg = RegLoss(net_config.reg_loss)
         self.crit_wh = RegLoss(net_config.reg_loss)
-        self.num_stacks = net_config.num_stacks
         self.wh_weight = net_config.wh_weight
         self.hm_weight = net_config.hm_weight
         self.off_weight = net_config.off_weight
         self.reg_offset = net_config.reg_offset
         self.not_enable_mse_loss = not net_config.mse_loss
-        self.Print = ops.Print()

     def construct(self, image, hm, reg_mask, ind, wh, reg):
         """Defines the computation performed."""

@@ -250,8 +236,9 @@ class CenterNetWithoutLossScaleCell(nn.Cell):
         weights = self.weights
         loss = self.network(image, hm, reg_mask, ind, wh, reg)
         grads = self.grad(self.network, weights)(image, hm, reg_mask, ind, wh, reg)
-        self.optimizer(grads)
-        return loss
+        succ = self.optimizer(grads)
+        ret = loss
+        return ops.depend(ret, succ)


 class CenterNetWithLossScaleCell(nn.Cell):

@@ -319,9 +306,12 @@ class CenterNetWithLossScaleCell(nn.Cell):
         else:
             cond = self.less_equal(self.base, flag_sum)
         overflow = cond
-        if not overflow:
-            self.optimizer(grads)
-        return (loss, cond, scaling_sens)
+        if overflow:
+            succ = False
+        else:
+            succ = self.optimizer(grads)
+        ret = (loss, cond, scaling_sens)
+        return ops.depend(ret, succ)


 class CenterNetDetEval(nn.Cell):

@@ -331,17 +321,15 @@ class CenterNetDetEval(nn.Cell):
     Args:
         net_config: The config info of CenterNet network.
         K(number): Max number of output objects. Default: 100.
-        enable_nms_fp16(bool): Use float16 data for max_pool, adaption for CPU. Default: True.
+        enable_nms_fp16(bool): Use float16 data for max_pool, adaption for CPU. Default: False.

     Returns:
         Tensor, detection of images(bboxes, score, keypoints and category id of each objects)
     """
-    def __init__(self, net_config, K=100, enable_nms_fp16=True):
+    def __init__(self, net_config, K=100, enable_nms_fp16=False):
         super(CenterNetDetEval, self).__init__()
         self.network = GatherDetectionFeatureCell(net_config)
         self.decode = DetectionDecode(net_config, K, enable_nms_fp16)
-        self.shape = ops.Shape()
-        self.reshape = ops.Reshape()

     def construct(self, image):
         """Calculate prediction scores"""
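The `ops.depend` pattern introduced above keeps the optimizer update alive in graph mode: the returned loss is made to depend on the update, so graph pruning cannot drop it. A minimal sketch of the same idiom with a toy train-step cell (illustrative, not the repo's class):

```python
# Minimal sketch of the ops.depend training-step idiom used above.
import mindspore.nn as nn
import mindspore.ops as ops

class TrainOneStep(nn.Cell):
    def __init__(self, network, optimizer):
        super(TrainOneStep, self).__init__(auto_prefix=False)
        self.network = network
        self.optimizer = optimizer
        self.weights = optimizer.parameters
        self.grad = ops.GradOperation(get_by_list=True)

    def construct(self, *inputs):
        loss = self.network(*inputs)
        grads = self.grad(self.network, self.weights)(*inputs)
        succ = self.optimizer(grads)
        # tie the update into the output so it executes before loss is returned
        return ops.depend(loss, succ)
```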
@@ -1,225 +0,0 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============================================================================
-"""
-network config setting, will be used in dataset.py, train.py, eval.py
-"""
-
-import numpy as np
-from easydict import EasyDict as edict
-
-
-dataset_config = edict({
-    "num_classes": 80,
-    'max_objs': 128,
-    'input_res': [512, 512],
-    'output_res': [128, 128],
-    'rand_crop': True,
-    'shift': 0.1,
-    'scale': 0.4,
-    'down_ratio': 4,
-    'aug_rot': 0.0,
-    'rotate': 0,
-    'flip_prop': 0.5,
-    'color_aug': True,
-    'coco_classes': ('background', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
-                     'train', 'truck', 'boat', 'traffic light', 'fire hydrant',
-                     'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
-                     'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra',
-                     'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
-                     'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
-                     'kite', 'baseball bat', 'baseball glove', 'skateboard',
-                     'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
-                     'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
-                     'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
-                     'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
-                     'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
-                     'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink',
-                     'refrigerator', 'book', 'clock', 'vase', 'scissors',
-                     'teddy bear', 'hair drier', 'toothbrush'),
-    'coco_class_name2id': {
-        'person': 1, 'bicycle': 2, 'car': 3, 'motorcycle': 4, 'airplane': 5,
-        'bus': 6, 'train': 7, 'truck': 8, 'boat': 9, 'traffic light': 10, 'fire hydrant': 11,
-        'stop sign': 13, 'parking meter': 14, 'bench': 15, 'bird': 16, 'cat': 17, 'dog': 18, 'horse': 19,
-        'sheep': 20, 'cow': 21, 'elephant': 22, 'bear': 23, 'zebra': 24, 'giraffe': 25, 'backpack': 27,
-        'umbrella': 28, 'handbag': 31, 'tie': 32, 'suitcase': 33, 'frisbee': 34, 'skis': 35,
-        'snowboard': 36, 'sports ball': 37, 'kite': 38, 'baseball bat': 39, 'baseball glove': 40,
-        'skateboard': 41, 'surfboard': 42, 'tennis racket': 43, 'bottle': 44, 'wine glass': 46,
-        'cup': 47, 'fork': 48, 'knife': 49, 'spoon': 50, 'bowl': 51, 'banana': 52, 'apple': 53, 'sandwich': 54,
-        'orange': 55, 'broccoli': 56, 'carrot': 57, 'hot dog': 58, 'pizza': 59, 'donut': 60, 'cake': 61,
-        'chair': 62, 'couch': 63, 'potted plant': 64, 'bed': 65, 'dining table': 67, 'toilet': 70, 'tv': 72,
-        'laptop': 73, 'mouse': 74, 'remote': 75, 'keyboard': 76, 'cell phone': 77, 'microwave': 78,
-        'oven': 79, 'toaster': 80, 'sink': 81, 'refrigerator': 82, 'book': 84, 'clock': 85, 'vase': 86,
-        'scissors': 87, 'teddy bear': 88, 'hair drier': 89, 'toothbrush': 90},
-    'mean': np.array([0.40789654, 0.44719302, 0.47026115], dtype=np.float32),
-    'std': np.array([0.28863828, 0.27408164, 0.27809835], dtype=np.float32),
-    'eig_val': np.array([0.2141788, 0.01817699, 0.00341571], dtype=np.float32),
-    'eig_vec': np.array([[-0.58752847, -0.69563484, 0.41340352],
-                         [-0.5832747, 0.00994535, -0.81221408],
-                         [-0.56089297, 0.71832671, 0.41158938]], dtype=np.float32),
-})
-
-
-net_config = edict({
-    'down_ratio': 4,
-    'last_level': 6,
-    'num_stacks': 2,
-    'n': 5,
-    'heads': {'hm': 80, 'wh': 2, 'reg': 2},
-    'cnv_dim': 256,
-    'modules': [2, 2, 2, 2, 2, 4],
-    'dims': [256, 256, 384, 384, 384, 512],
-    'dense_wh': False,
-    'norm_wh': False,
-    'cat_spec_wh': False,
-    'reg_offset': True,
-    'hm_weight': 1,
-    'off_weight': 1,
-    'wh_weight': 0.1,
-    'mse_loss': False,
-    'reg_loss': 'l1',
-})
-
-
-train_config = edict({
-    'batch_size': 12,
-    'loss_scale_value': 1024,
-    'optimizer': 'Adam',
-    'lr_schedule': 'MultiDecay',
-    'Adam': edict({
-        'weight_decay': 0.0,
-        'decay_filter': lambda x: x.name.endswith('.bias') or x.name.endswith('.beta') or x.name.endswith('.gamma'),
-    }),
-    'PolyDecay': edict({
-        'learning_rate': 2.4e-4,
-        'end_learning_rate': 2.4e-7,
-        'power': 5.0,
-        'eps': 1e-7,
-        'warmup_steps': 2000,
-    }),
-    'MultiDecay': edict({
-        'learning_rate': 2.4e-4,
-        'eps': 1e-7,
-        'warmup_steps': 2000,
-        'multi_epochs': [105, 125],
-        'factor': 10,
-    })
-})
-
-
-eval_config = edict({
-    'SOFT_NMS': True,
-    'keep_res': True,
-    'multi_scales': [1.0],
-    'pad': 127,
-    'K': 100,
-    'score_thresh': 0.3,
-    'valid_ids': [
-        1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13,
-        14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
-        24, 25, 27, 28, 31, 32, 33, 34, 35, 36,
-        37, 38, 39, 40, 41, 42, 43, 44, 46, 47,
-        48, 49, 50, 51, 52, 53, 54, 55, 56, 57,
-        58, 59, 60, 61, 62, 63, 64, 65, 67, 70,
-        72, 73, 74, 75, 76, 77, 78, 79, 80, 81,
-        82, 84, 85, 86, 87, 88, 89, 90],
-    'color_list': [
-        0.000, 0.800, 1.000,
-        0.850, 0.325, 0.098,
-        0.929, 0.694, 0.125,
-        0.494, 0.184, 0.556,
-        0.466, 0.674, 0.188,
-        0.301, 0.745, 0.933,
-        0.635, 0.078, 0.184,
-        0.300, 0.300, 0.300,
-        0.600, 0.600, 0.600,
-        1.000, 0.000, 0.000,
-        1.000, 0.500, 0.000,
-        0.749, 0.749, 0.000,
-        0.000, 1.000, 0.000,
-        0.000, 0.000, 1.000,
-        0.667, 0.000, 1.000,
-        0.333, 0.333, 0.000,
-        0.333, 0.667, 0.333,
-        0.333, 1.000, 0.000,
-        0.667, 0.333, 0.000,
-        0.667, 0.667, 0.000,
-        0.667, 1.000, 0.000,
-        1.000, 0.333, 0.000,
-        1.000, 0.667, 0.000,
-        1.000, 1.000, 0.000,
-        0.000, 0.333, 0.500,
-        0.000, 0.667, 0.500,
-        0.000, 1.000, 0.500,
-        0.333, 0.000, 0.500,
-        0.333, 0.333, 0.500,
-        0.333, 0.667, 0.500,
-        0.333, 1.000, 0.500,
-        0.667, 0.000, 0.500,
-        0.667, 0.333, 0.500,
-        0.667, 0.667, 0.500,
-        0.667, 1.000, 0.500,
-        1.000, 0.000, 0.500,
-        1.000, 0.333, 0.500,
-        1.000, 0.667, 0.500,
-        1.000, 1.000, 0.500,
-        0.000, 0.333, 1.000,
-        0.000, 0.667, 1.000,
-        0.000, 1.000, 1.000,
-        0.333, 0.000, 1.000,
-        0.333, 0.333, 1.000,
-        0.333, 0.667, 1.000,
-        0.333, 1.000, 1.000,
-        0.667, 0.000, 1.000,
-        0.667, 0.333, 1.000,
-        0.667, 0.667, 1.000,
-        0.667, 1.000, 1.000,
-        1.000, 0.000, 1.000,
-        1.000, 0.333, 1.000,
-        1.000, 0.667, 1.000,
-        0.167, 0.800, 0.000,
-        0.333, 0.000, 0.000,
-        0.500, 0.000, 0.000,
-        0.667, 0.000, 0.000,
-        0.833, 0.000, 0.000,
-        1.000, 0.000, 0.000,
-        0.000, 0.667, 0.400,
-        0.000, 0.333, 0.000,
-        0.000, 0.500, 0.000,
-        0.000, 0.667, 0.000,
-        0.000, 0.833, 0.000,
-        0.000, 1.000, 0.000,
-        0.000, 0.000, 0.167,
-        0.000, 0.000, 0.333,
-        0.000, 0.000, 0.500,
-        0.000, 0.000, 0.667,
-        0.000, 0.000, 0.833,
-        0.000, 0.000, 1.000,
-        0.000, 0.200, 0.800,
-        0.143, 0.143, 0.543,
-        0.286, 0.286, 0.286,
-        0.429, 0.429, 0.429,
-        0.571, 0.571, 0.571,
-        0.714, 0.714, 0.714,
-        0.857, 0.857, 0.857,
-        0.000, 0.447, 0.741,
-        0.50, 0.5, 0],
-})
-
-export_config = edict({
-    'input_res': dataset_config.input_res,
-    'ckpt_file': "./ckpt_file.ckpt",
-    'export_format': "MINDIR",
-    'export_name': "CenterNet_ObjectDetection",
-})
@@ -17,21 +17,35 @@ Data operations, will be used in train.py
 """

 import os
+import sys
 import math
-import argparse
 import cv2
 import numpy as np
 import pycocotools.coco as coco
 import mindspore.dataset as ds
 from mindspore import log as logger
 from mindspore.mindrecord import FileWriter
-from src.image import color_aug, get_affine_transform, affine_transform
-from src.image import gaussian_radius, draw_umich_gaussian, draw_msra_gaussian, draw_dense_reg
-from src.visual import visual_image

+try:
+    from src.model_utils.config import config, dataset_config
+    from src.model_utils.moxing_adapter import moxing_wrapper
+    from src.image import color_aug, get_affine_transform, affine_transform
+    from src.image import gaussian_radius, draw_umich_gaussian, draw_msra_gaussian, draw_dense_reg
+    from src.visual import visual_image
+except ImportError as import_error:
+    print('Import Error: {}, trying append path/centernet_det/src/../'.format(import_error))
+    sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
+    from src.model_utils.config import config, dataset_config
+    from src.model_utils.moxing_adapter import moxing_wrapper
+    from src.image import color_aug, get_affine_transform, affine_transform
+    from src.image import gaussian_radius, draw_umich_gaussian, draw_msra_gaussian, draw_dense_reg
+    from src.visual import visual_image


 _current_dir = os.path.dirname(os.path.realpath(__file__))
 cv2.setNumThreads(0)


 class COCOHP(ds.Dataset):
     """
     Encapsulation class of COCO dataset.

@@ -386,16 +400,19 @@ class COCOHP(ds.Dataset):
         return data_set


-if __name__ == '__main__':
-    # Convert coco2017 dataset to mindrecord to improve performance on host
-    from src.config import dataset_config
+def modelarts_pre_process():
+    """modelarts pre process function."""
+    config.coco_data_dir = config.data_path
+    config.mindrecord_dir = config.output_path

-    parser = argparse.ArgumentParser(description='CenterNet MindRecord dataset')
-    parser.add_argument("--coco_data_dir", type=str, default="", help="Coco dataset directory.")
-    parser.add_argument("--mindrecord_dir", type=str, default="", help="MindRecord dataset dir.")
-    parser.add_argument("--mindrecord_prefix", type=str, default="coco_det.train.mind",
-                        help="Prefix of MindRecord dataset filename.")
-    args_opt = parser.parse_args()

+@moxing_wrapper(pre_process=modelarts_pre_process)
+def coco2mindrecord():
+    """Convert coco2017 dataset to mindrecord"""
     dsc = COCOHP(dataset_config, run_mode="train")
-    dsc.init(args_opt.coco_data_dir)
-    dsc.transfer_coco_to_mindrecord(args_opt.mindrecord_dir, args_opt.mindrecord_prefix, shard_num=8)
+    dsc.init(config.coco_data_dir)
+    dsc.transfer_coco_to_mindrecord(config.mindrecord_dir, config.mindrecord_prefix, shard_num=8)


+if __name__ == '__main__':
+    coco2mindrecord()
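`transfer_coco_to_mindrecord` is built on MindSpore's `FileWriter`. A hedged sketch of that core pattern; the schema and sample layout here are illustrative, not the repo's actual schema:

```python
# Sketch (not the repo's exact implementation): the FileWriter pattern
# behind transfer_coco_to_mindrecord, sharding samples across 8 files.
from mindspore.mindrecord import FileWriter

def write_mindrecord(samples, prefix="coco_det.train.mind", shard_num=8):
    writer = FileWriter(file_name=prefix, shard_num=shard_num)
    schema = {"image": {"type": "bytes"}, "num_objects": {"type": "int32"}}
    writer.add_schema(schema, "coco detection schema")
    for sample in samples:   # each sample: {"image": raw_bytes, "num_objects": n}
        writer.write_raw_data([sample])
    writer.commit()
```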
@@ -16,29 +16,33 @@
 Decode from heads for evaluation
 """

-import mindspore.ops as ops
+import mindspore as ms
 import mindspore.nn as nn
+import mindspore.ops as ops
+from mindspore.ops import operations as P
 from mindspore.common import dtype as mstype
 from .utils import GatherFeature, TransposeGatherFeature


 class NMS(nn.Cell):
     """
     Non-maximum suppression

     Args:
         kernel(int): Maxpooling kernel size. Default: 3.
-        enable_nms_fp16(bool): Use float16 data for max_pool, adaption for CPU. Default: True.
+        enable_nms_fp16(bool): Use float16 data for max_pool, adaption for CPU. Default: False.

     Returns:
         Tensor, heatmap after non-maximum suppression.
     """
-    def __init__(self, kernel=3, enable_nms_fp16=True):
+    def __init__(self, kernel=3, enable_nms_fp16=False):
         super(NMS, self).__init__()
         self.pad = (kernel - 1) // 2
         self.cast = ops.Cast()
         self.dtype = ops.DType()
         self.equal = ops.Equal()
-        self.max_pool = nn.MaxPool2d(kernel, stride=1, pad_mode="same")
+        self.Abs = P.Abs()
+        self.max_pool_ = nn.MaxPool2d(kernel, stride=1, pad_mode="same")
+        self.max_pool = P.MaxPoolWithArgmax(kernel_size=kernel, strides=1, pad_mode='same')
         self.enable_fp16 = enable_nms_fp16

     def construct(self, heat):

@@ -46,16 +50,23 @@ class NMS(nn.Cell):
         dtype = self.dtype(heat)
         if self.enable_fp16:
             heat = self.cast(heat, mstype.float16)
-            heat_max = self.max_pool(heat)
+            heat_max = self.max_pool_(heat)
             keep = self.equal(heat, heat_max)
             keep = self.cast(keep, dtype)
             heat = self.cast(heat, dtype)
         else:
-            heat_max = self.max_pool(heat)
-            keep = self.equal(heat, heat_max)
+            heat_max, _ = self.max_pool(heat)
+            error = self.cast((heat - heat_max), mstype.float32)
+            abs_error = self.Abs(error)
+            abs_out = self.Abs(heat)
+            error = abs_error / (abs_out + 1e-12)
+            keep = P.Select()(P.LessEqual()(error, 1e-3),
+                              P.Fill()(ms.float32, P.Shape()(error), 1.0),
+                              P.Fill()(ms.float32, P.Shape()(error), 0.0))
         heat = heat * keep
         return heat


 class GatherTopK(nn.Cell):
     """
     Gather topk features through all channels

@@ -73,7 +84,8 @@ class GatherTopK(nn.Cell):
         self.cast = ops.Cast()
         self.dtype = ops.DType()
         self.gather_feat = GatherFeature()
-        self.mod = ops.Mod()
+        # The ops.Mod() operator will produce errors on the Ascend 310
+        self.mod = P.FloorMod()
         self.div = ops.Div()

     def construct(self, scores, K=40):

@@ -95,6 +107,7 @@ class GatherTopK(nn.Cell):
         topk_xs = self.cast(self.reshape(topk_xs, (b, K)), self.dtype(scores))
         return topk_score, topk_inds, topk_clses, topk_ys, topk_xs


 class DetectionDecode(nn.Cell):
     """
     Decode from heads to gather multi-objects info.

@@ -107,7 +120,7 @@ class DetectionDecode(nn.Cell):
     Returns:
         Tensor, multi-objects detections.
     """
-    def __init__(self, net_config, K=100, enable_nms_fp16=True):
+    def __init__(self, net_config, K=100, enable_nms_fp16=False):
         super(DetectionDecode, self).__init__()
         self.K = K
         self.nms = NMS(enable_nms_fp16=enable_nms_fp16)
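The new else-branch replaces an exact `Equal` test on the max-pooled heatmap with a relative-error test, which is more robust after `MaxPoolWithArgmax` on Ascend 310. A NumPy illustration of the keep-mask computation, using `scipy.ndimage.maximum_filter` purely as a stand-in for the max pool:

```python
# NumPy illustration of the relative-tolerance peak test in NMS.construct:
# keep positions whose value is within 1e-3 relative error of the local max.
import numpy as np
from scipy.ndimage import maximum_filter

def nms_keep_mask(heat, kernel=3, rtol=1e-3):
    heat_max = maximum_filter(heat, size=kernel, mode="nearest")  # max-pool stand-in
    error = np.abs(heat - heat_max) / (np.abs(heat) + 1e-12)
    return (error <= rtol).astype(np.float32)

heat = np.random.rand(128, 128).astype(np.float32)
peaks = heat * nms_keep_mask(heat)  # non-peak responses are zeroed out
```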
@@ -1,152 +0,0 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============================================================================
-"""
-hccl configuration file generation
-"""
-
-import os
-import sys
-import json
-import socket
-from argparse import ArgumentParser
-from typing import Dict, Any
-
-
-def parse_args():
-    """
-    parse args .
-
-    Args:
-
-    Returns:
-        args.
-
-    Examples:
-        >>> parse_args()
-    """
-    parser = ArgumentParser(description="mindspore distributed training launch "
-                                        "helper utility that will generate hccl"
-                                        " config file")
-    parser.add_argument("--device_num", type=str, default="[0,8)",
-                        help="The number of the Ascend accelerators used. please note that the Ascend accelerators"
-                             "used must be continuous, such [0,4) means to use four chips "
-                             "0,1,2,3; [0,1) means to use chip 0; The first four chips are"
-                             "a group, and the last four chips are a group. In addition to"
-                             "the [0,8) chips are allowed, other cross-group such as [3,6)"
-                             "are prohibited.")
-    parser.add_argument("--visible_devices", type=str, default="0,1,2,3,4,5,6,7",
-                        help="will use the visible devices sequentially")
-    parser.add_argument("--server_ip", type=str, default="",
-                        help="server ip")
-    args = parser.parse_args()
-    return args
-
-
-def get_host_ip():
-    """
-    get host ip
-    """
-    ip = None
-
-    try:
-        hostname = socket.gethostname()
-        ip = socket.gethostbyname(hostname)
-    except EOFError:
-        pass
-
-    return ip
-
-
-def main():
-    print("start", __file__)
-    args = parse_args()
-
-    # visible_devices
-    visible_devices = args.visible_devices.split(',')
-    print('visible_devices:{}'.format(visible_devices))
-
-    # server_id
-    ip = get_host_ip()
-    if args.server_ip:
-        server_id = args.server_ip
-    elif ip:
-        server_id = ip
-    else:
-        raise ValueError("please input server ip!")
-    print('server_id:{}'.format(server_id))
-
-    # device_num
-    first_num = int(args.device_num[1])
-    last_num = int(args.device_num[3])
-    if first_num < 0 or last_num > 8:
-        raise ValueError("device num {} must be in range [0,8] !".format(args.device_num))
-    if first_num > last_num:
-        raise ValueError("First num {} of device num {} must less than last num {} !".format(first_num, args.device_num,
-                                                                                             last_num))
-    if first_num < 4:
-        if last_num > 4:
-            if first_num == 0 and last_num == 8:
-                pass
-            else:
-                raise ValueError("device num {} must be in the same group of [0,4] or [4,8] !".format(args.device_num))
-
-    device_num_list = list(range(first_num, last_num))
-    print("device_num_list:", device_num_list)
-
-    assert len(visible_devices) >= len(device_num_list)
-
-    # construct hccn_table
-    device_ips: Dict[Any, Any] = {}
-    with open('/etc/hccn.conf', 'r') as fin:
-        for hccn_item in fin.readlines():
-            if hccn_item.strip().startswith('address_'):
-                device_id, device_ip = hccn_item.split('=')
-                device_id = device_id.split('_')[1]
-                device_ips[device_id] = device_ip.strip()
-
-    hccn_table = {'version': '1.0',
-                  'server_count': '1',
-                  'server_list': []}
-    device_list = []
-    rank_id = 0
-    for instance_id in device_num_list:
-        device_id = visible_devices[instance_id]
-        device_ip = device_ips[device_id]
-        device = {'device_id': device_id,
-                  'device_ip': device_ip,
-                  'rank_id': str(rank_id)}
-        print('rank_id:{}, device_id:{}, device_ip:{}'.format(rank_id, device_id, device_ip))
-        rank_id += 1
-        device_list.append(device)
-    hccn_table['server_list'].append({
-        'server_id': server_id,
-        'device': device_list,
-        'host_nic_ip': 'reserve'
-    })
-    hccn_table['status'] = 'completed'
-
-    # save hccn_table to file
-    table_path = os.getcwd()
-    table_fn = os.path.join(table_path,
-                            'hccl_{}p_{}_{}.json'.format(len(device_num_list), "".join(map(str, device_num_list)),
-                                                         server_id))
-    with open(table_fn, 'w') as table_fp:
-        json.dump(hccn_table, table_fp, indent=4)
-    sys.stdout.flush()
-    print("Completed: hccl file was save in :", table_fn)
-
-
-if __name__ == "__main__":
-    main()
@@ -0,0 +1,156 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Parse arguments"""
+
+import os
+import ast
+import argparse
+from pprint import pprint, pformat
+import yaml
+import numpy as np
+
+
+class Config:
+    """
+    Configuration namespace. Convert dictionary to members.
+    """
+    def __init__(self, cfg_dict):
+        for k, v in cfg_dict.items():
+            if isinstance(v, str) and (v[:9] == 'np.array(' and v[-17:] == 'dtype=np.float32)'):
+                v = np.array(ast.literal_eval(v[9:v.rfind(']') + 1]), dtype=np.float32)
+            if isinstance(v, (list, tuple)):
+                setattr(self, k, [Config(x) if isinstance(x, dict) else x for x in v])
+            else:
+                setattr(self, k, Config(v) if isinstance(v, dict) else v)
+
+    def __str__(self):
+        return pformat(self.__dict__)
+
+    def __repr__(self):
+        return self.__str__()
+
+
+def parse_cli_to_yaml(parser, cfg, helper=None, choices=None, cfg_path="default_config.yaml"):
+    """
+    Parse command line arguments to the configuration according to the default yaml.
+
+    Args:
+        parser: Parent parser.
+        cfg: Base configuration.
+        helper: Helper description.
+        cfg_path: Path to the default yaml config.
+    """
+    parser = argparse.ArgumentParser(description="[REPLACE THIS at config.py]",
+                                     parents=[parser])
+    helper = {} if helper is None else helper
+    choices = {} if choices is None else choices
+    for item in cfg:
+        if not isinstance(cfg[item], list) and not isinstance(cfg[item], dict):
+            help_description = helper[item] if item in helper else "Please reference to {}".format(cfg_path)
+            choice = choices[item] if item in choices else None
+            if isinstance(cfg[item], bool):
+                parser.add_argument("--" + item, type=ast.literal_eval, default=cfg[item], choices=choice,
+                                    help=help_description)
+            else:
+                parser.add_argument("--" + item, type=type(cfg[item]), default=cfg[item], choices=choice,
+                                    help=help_description)
+    args = parser.parse_args()
+    return args
+
+
+def parse_yaml(yaml_path):
+    """
+    Parse the yaml config file.
+
+    Args:
+        yaml_path: Path to the yaml config.
+    """
+    with open(yaml_path, 'r') as fin:
+        try:
+            cfgs = yaml.load_all(fin.read(), Loader=yaml.FullLoader)
+            cfgs = [x for x in cfgs]
+            if len(cfgs) == 1:
+                cfg_helper = {}
+                cfg = cfgs[0]
+                cfg_choices = {}
+            elif len(cfgs) == 2:
+                cfg, cfg_helper = cfgs
+                cfg_choices = {}
+            elif len(cfgs) == 3:
+                cfg, cfg_helper, cfg_choices = cfgs
+            else:
+                raise ValueError("At most 3 docs (config, description for help, choices) are supported in config yaml")
+            print(cfg_helper)
+        except:
+            raise ValueError("Failed to parse yaml")
+    return cfg, cfg_helper, cfg_choices
+
+
+def merge(args, cfg):
+    """
+    Merge the base config from yaml file and command line arguments.
+
+    Args:
+        args: Command line arguments.
+        cfg: Base configuration.
+    """
+    args_var = vars(args)
+    for item in args_var:
+        cfg[item] = args_var[item]
+    return cfg
+
+
+def extra_operations(cfg):
+    """
+    Do extra work on Config object.
+
+    Args:
+        cfg: Object after instantiation of class 'Config'.
+    """
+    cfg.train_config.Adam.decay_filter = lambda x: x.name.endswith('.bias') or x.name.endswith('.beta') or x.name.endswith('.gamma')
+    cfg.export_config.input_res = cfg.dataset_config.input_res
+    if cfg.export_load_ckpt:
+        cfg.export_config.ckpt_file = cfg.export_load_ckpt
+    if cfg.export_name:
+        cfg.export_config.export_name = cfg.export_name
+    if cfg.export_format:
+        cfg.export_config.export_format = cfg.export_format
+
+
+def get_config():
+    """
+    Get Config according to the yaml file and cli arguments.
+    """
+    parser = argparse.ArgumentParser(description="default name", add_help=False)
+    current_dir = os.path.dirname(os.path.abspath(__file__))
+    parser.add_argument("--config_path", type=str, default=os.path.join(current_dir, "../../default_config.yaml"),
+                        help="Config file path")
+    path_args, _ = parser.parse_known_args()
+    default, helper, choices = parse_yaml(path_args.config_path)
+    pprint(default)
+    args = parse_cli_to_yaml(parser=parser, cfg=default, helper=helper, choices=choices, cfg_path=path_args.config_path)
+    final_config = merge(args, default)
+    config_obj = Config(final_config)
+    extra_operations(config_obj)
+    return config_obj
+
+
+config = get_config()
+dataset_config = config.dataset_config
+net_config = config.net_config
+train_config = config.train_config
+eval_config = config.eval_config
+export_config = config.export_config
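`parse_yaml` accepts up to three YAML documents: values, per-key help text, and per-key choices. A small sketch of that contract; the keys are illustrative, not the full `default_config.yaml`:

```python
# Sketch: the three-document YAML layout (values, helper text, choices)
# that parse_yaml expects, parsed inline here for demonstration.
import yaml

yaml_text = """
run_mode: "val"
enable_eval: True
---
run_mode: "one of val/test"
enable_eval: "run COCO eval after predict"
---
run_mode: ["val", "test"]
"""
cfg, cfg_helper, cfg_choices = list(yaml.load_all(yaml_text, Loader=yaml.FullLoader))
assert cfg["run_mode"] == "val" and cfg_choices["run_mode"] == ["val", "test"]
```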
@@ -0,0 +1,27 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Device adapter for ModelArts"""
+
+from src.model_utils.config import config
+
+if config.enable_modelarts:
+    from src.model_utils.moxing_adapter import get_device_id, get_device_num, get_rank_id, get_job_id
+else:
+    from src.model_utils.local_adapter import get_device_id, get_device_num, get_rank_id, get_job_id
+
+__all__ = [
+    "get_device_id", "get_device_num", "get_rank_id", "get_job_id"
+]
@@ -0,0 +1,36 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Local adapter"""
+
+import os
+
+def get_device_id():
+    device_id = os.getenv('DEVICE_ID', '0')
+    return int(device_id)
+
+
+def get_device_num():
+    device_num = os.getenv('RANK_SIZE', '1')
+    return int(device_num)
+
+
+def get_rank_id():
+    global_rank_id = os.getenv('RANK_ID', '0')
+    return int(global_rank_id)
+
+
+def get_job_id():
+    return "Local Job"
@@ -0,0 +1,123 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Moxing adapter for ModelArts"""
+
+import os
+import functools
+from mindspore import context
+from mindspore.profiler import Profiler
+from src.model_utils.config import config
+
+_global_sync_count = 0
+
+def get_device_id():
+    device_id = os.getenv('DEVICE_ID', '0')
+    return int(device_id)
+
+
+def get_device_num():
+    device_num = os.getenv('RANK_SIZE', '1')
+    return int(device_num)
+
+
+def get_rank_id():
+    global_rank_id = os.getenv('RANK_ID', '0')
+    return int(global_rank_id)
+
+
+def get_job_id():
+    job_id = os.getenv('JOB_ID')
+    job_id = job_id if job_id != "" else "default"
+    return job_id
+
+def sync_data(from_path, to_path):
+    """
+    Download data from remote obs to local directory if the first url is remote url and the second one is local path
+    Upload data from local directory to remote obs in contrast.
+    """
+    import moxing as mox
+    import time
+    global _global_sync_count
+    sync_lock = "/tmp/copy_sync.lock" + str(_global_sync_count)
+    _global_sync_count += 1
+
+    # Each server contains 8 devices as most.
+    if get_device_id() % min(get_device_num(), 8) == 0 and not os.path.exists(sync_lock):
+        print("from path: ", from_path)
+        print("to path: ", to_path)
+        mox.file.copy_parallel(from_path, to_path)
+        print("===finish data synchronization===")
+        try:
+            os.mknod(sync_lock)
+            # print("os.mknod({}) success".format(sync_lock))
+        except IOError:
+            pass
+        print("===save flag===")
+
+    while True:
+        if os.path.exists(sync_lock):
+            break
+        time.sleep(1)
+
+    print("Finish sync data from {} to {}.".format(from_path, to_path))
+
+
+def moxing_wrapper(pre_process=None, post_process=None):
+    """
+    Moxing wrapper to download dataset and upload outputs.
+    """
+    def wrapper(run_func):
+        @functools.wraps(run_func)
+        def wrapped_func(*args, **kwargs):
+            # Download data from data_url
+            if config.enable_modelarts:
+                if config.data_url:
+                    sync_data(config.data_url, config.data_path)
+                    print("Dataset downloaded: ", os.listdir(config.data_path))
+                if config.checkpoint_url:
+                    sync_data(config.checkpoint_url, config.load_path)
+                    print("Preload downloaded: ", os.listdir(config.load_path))
+                if config.train_url:
+                    sync_data(config.train_url, config.output_path)
+                    print("Workspace downloaded: ", os.listdir(config.output_path))
+
+                context.set_context(save_graphs_path=os.path.join(config.output_path, str(get_rank_id())))
+                config.device_num = get_device_num()
+                config.device_id = get_device_id()
+                if not os.path.exists(config.output_path):
+                    os.makedirs(config.output_path)
+
+                if pre_process:
+                    pre_process()
+
+            if config.enable_profiling:
+                profiler = Profiler()
+
+            run_func(*args, **kwargs)
+
+            if config.enable_profiling:
+                profiler.analyse()
+
+            # Upload data to train_url
+            if config.enable_modelarts:
+                if post_process:
+                    post_process()
+
+                if config.train_url:
+                    print("Start to copy output directory")
+                    sync_data(config.output_path, config.train_url)
+        return wrapped_func
+    return wrapper
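Entry points opt into this machinery by decorating their main function. A hedged usage sketch; `train_centernet` and its body are illustrative, while the decorator contract (pre_process after OBS download, post_process before output upload) is the one defined above:

```python
# Sketch: opting a training entry point into the ModelArts data sync.
from src.model_utils.config import config
from src.model_utils.moxing_adapter import moxing_wrapper

def modelarts_pre_process():
    config.mindrecord_dir = config.data_path   # retarget inputs to the synced copy

@moxing_wrapper(pre_process=modelarts_pre_process)
def train_centernet():
    print("training with data from", config.mindrecord_dir)

if __name__ == "__main__":
    train_centernet()
```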
@ -20,13 +20,6 @@ from .image import get_affine_transform, affine_transform, transform_preds
|
|||
from .visual import coco_box_to_bbox
|
||||
|
||||
|
||||
try:
|
||||
from nms import soft_nms
|
||||
except ImportError:
|
||||
print('NMS not installed! Do \n cd $CenterNet_ROOT/scripts/ \n'
|
||||
'and see run_standalone_eval.sh for more details to install it\n')
|
||||
|
||||
|
||||
def post_process(dets, meta, scale, num_classes):
|
||||
"""rescale detection to original scale"""
|
||||
c, s, h, w = meta['c'], meta['s'], meta['out_height'], meta['out_width']
|
||||
|
@ -59,7 +52,12 @@ def merge_outputs(detections, num_classes, SOFT_NMS=True):
|
|||
results[j] = np.concatenate(
|
||||
[detection[j] for detection in detections], axis=0).astype(np.float32)
|
||||
if SOFT_NMS:
|
||||
soft_nms(results[j], Nt=0.5, threshold=0.01, method=2)
|
||||
try:
|
||||
from nms import soft_nms
|
||||
except ImportError:
|
||||
print('NMS not installed! Do \n cd $CenterNet_ROOT/scripts/ \n'
|
||||
'and see run_standalone_eval.sh for more details to install it\n')
|
||||
soft_nms(results[j], Nt=0.5, threshold=0.001, method=2)
|
||||
|
||||
scores = np.hstack(
|
||||
[results[j][:, 4] for j in range(1, num_classes + 1)])
|
||||
|
|
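For intuition, `method=2` selects the Gaussian variant of Soft-NMS: an overlapping box is not discarded outright; its score is decayed by `exp(-iou**2 / sigma)` and it is only dropped once the decayed score falls below `threshold`. A rough numpy sketch of that decay rule (illustrative only, not the compiled `nms` extension used above):

```python
import numpy as np

def gaussian_decay(score, iou, sigma=0.5):
    """Soft-NMS (Gaussian): decay a box score by its overlap with a kept box."""
    return score * np.exp(-(iou ** 2) / sigma)

print(gaussian_decay(0.9, 0.8))  # score drops from 0.90 to ~0.25
```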
@ -22,52 +22,10 @@ import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import dtype as mstype
from mindspore.common.initializer import initializer
from mindspore.common.parameter import Parameter
from mindspore.common.tensor import Tensor
from mindspore.nn.learning_rate_schedule import LearningRateSchedule, PolynomialDecayLR, WarmUpLR
from mindspore.train.callback import Callback

clip_grad = ops.MultitypeFuncGraph("clip_grad")


@clip_grad.register("Number", "Tensor")
def _clip_grad(clip_value, grad):
    """
    Clip gradients.

    Inputs:
        clip_value (float): Specifies how much to clip.
        grad (Tensor): Gradient.

    Outputs:
        Tensor, the clipped gradient.
    """
    dt = ops.dtype(grad)
    new_grad = nn.ClipByNorm()(grad, ops.cast(ops.tuple_to_array((clip_value,)), dt))
    return new_grad


class ClipByNorm(nn.Cell):
    """
    Clip grads by gradient norm

    Args:
        clip_norm(float): The target norm of gradient clip. Default: 1.0

    Returns:
        Tuple of Tensors, gradients after clip.
    """
    def __init__(self, clip_norm=1.0):
        super(ClipByNorm, self).__init__()
        self.hyper_map = ops.HyperMap()
        self.clip_norm = clip_norm

    def construct(self, grads):
        grads = self.hyper_map(ops.partial(clip_grad, self.clip_norm), grads)
        return grads
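`ClipByNorm` maps the standard norm-clipping rule over every gradient tensor: if the norm of `g` exceeds `clip_norm`, the tensor is rescaled to norm `clip_norm`, otherwise it passes through unchanged. A small numpy equivalent of the rule:

```python
import numpy as np

def clip_by_norm(g, clip_norm=1.0):
    norm = np.linalg.norm(g)
    return g if norm <= clip_norm else g * (clip_norm / norm)

print(clip_by_norm(np.array([3.0, 4.0])))  # norm 5.0 -> rescaled to [0.6, 0.8], norm 1.0
```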
reciprocal = ops.Reciprocal()
grad_scale = ops.MultitypeFuncGraph("grad_scale")


@ -95,37 +53,17 @@ class GradScale(nn.Cell):
        return grads


class ClipByValue(nn.Cell):
    """
    Clip tensor by value

    Args: None

    Returns:
        Tensor, output after clip.
    """
    def __init__(self):
        super(ClipByValue, self).__init__()
        self.min = ops.Minimum()
        self.max = ops.Maximum()

    def construct(self, x, clip_value_min, clip_value_max):
        x_min = self.min(x, clip_value_max)
        x_max = self.max(x_min, clip_value_min)
        return x_max


class GatherFeature(nn.Cell):
    """
    Gather feature at specified position

    Args:
        enable_cpu_gather (bool): Use cpu operator GatherD to gather feature or not, adaptation for CPU. Default: True.
        enable_cpu_gather (bool): Use cpu operator GatherD to gather feature or not, adaptation for CPU. Default: False.

    Returns:
        Tensor, feature at specified position
    """
    def __init__(self, enable_cpu_gather=True):
    def __init__(self, enable_cpu_gather=False):
        super(GatherFeature, self).__init__()
        self.tile = ops.Tile()
        self.shape = ops.Shape()

@ -257,171 +195,6 @@ class FocalLoss(nn.Cell):
        return loss


class GHMCLoss(nn.Cell):
    """
    Wrapper for gradient harmonizing loss for classification.

    Args:
        bins(int): Number of bins. Default: 10.
        momentum(float): Momentum for moving gradient density. Default: 0.0.

    Returns:
        Tensor, GHM loss for classification.
    """
    def __init__(self, bins=10, momentum=0.0):
        super(GHMCLoss, self).__init__()
        self.bins = bins
        self.momentum = momentum
        edges_left = np.array([float(x) / bins for x in range(bins)], dtype=np.float32)
        self.edges_left = Tensor(edges_left.reshape((bins, 1, 1, 1, 1)))
        edges_right = np.array([float(x) / bins for x in range(1, bins + 1)], dtype=np.float32)
        edges_right[-1] += 1e-4
        self.edges_right = Tensor(edges_right.reshape((bins, 1, 1, 1, 1)))

        if momentum >= 0:
            self.acc_sum = Parameter(initializer(0, [bins], mstype.float32))

        self.abs = ops.Abs()
        self.log = ops.Log()
        self.cast = ops.Cast()
        self.select = ops.Select()
        self.reshape = ops.Reshape()
        self.reduce_sum = ops.ReduceSum()
        self.max = ops.Maximum()
        self.less = ops.Less()
        self.equal = ops.Equal()
        self.greater = ops.Greater()
        self.logical_and = ops.LogicalAnd()
        self.greater_equal = ops.GreaterEqual()
        self.zeros_like = ops.ZerosLike()
        self.expand_dims = ops.ExpandDims()

    def construct(self, out, target):
        """GHM loss for classification"""
        g = self.abs(out - target)
        g = self.expand_dims(g, 0)  # (1, b, c, h, w)

        pos_inds = self.cast(self.equal(target, 1.0), mstype.float32)
        tot = self.max(self.reduce_sum(pos_inds, ()), 1.0)

        # (bin, b, c, h, w)
        inds_mask = self.logical_and(self.greater_equal(g, self.edges_left), self.less(g, self.edges_right))
        zero_matrix = self.cast(self.zeros_like(inds_mask), mstype.float32)
        inds = self.cast(inds_mask, mstype.float32)
        # (bins,)
        num_in_bin = self.reduce_sum(inds, (1, 2, 3, 4))
        valid_bins = self.greater(num_in_bin, 0)
        num_valid_bin = self.reduce_sum(self.cast(valid_bins, mstype.float32), ())

        if self.momentum > 0:
            self.acc_sum = self.select(valid_bins,
                                       self.momentum * self.acc_sum + (1 - self.momentum) * num_in_bin,
                                       self.acc_sum)
            acc_sum = self.acc_sum
            acc_sum = self.reshape(acc_sum, (self.bins, 1, 1, 1, 1))
            acc_sum = acc_sum + zero_matrix
            weights = self.select(self.equal(inds, 1), tot / acc_sum, zero_matrix)
            # (b, c, h, w)
            weights = self.reduce_sum(weights, 0)
        else:
            num_in_bin = self.reshape(num_in_bin, (self.bins, 1, 1, 1, 1))
            num_in_bin = num_in_bin + zero_matrix
            weights = self.select(self.equal(inds, 1), tot / num_in_bin, zero_matrix)
            # (b, c, h, w)
            weights = self.reduce_sum(weights, 0)

        weights = weights / num_valid_bin

        ghmc_loss = (target - 1.0) * self.log(1.0 - out) - target * self.log(out)
        ghmc_loss = self.reduce_sum(ghmc_loss * weights, ()) / tot
        return ghmc_loss
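The binning in `construct` is the heart of gradient harmonizing: each prediction's gradient norm `g = |out - target|` is histogrammed into `bins` equal intervals and weighted by the inverse of its bin's population, so the flood of easy examples crowded into the low-`g` bins is down-weighted. A simplified numpy sketch of the weighting for one batch (no momentum; `tot` is reduced here to the number of predictions):

```python
import numpy as np

g = np.array([0.05, 0.08, 0.12, 0.95])         # |out - target| per prediction
bins = 10
idx = np.minimum((g * bins).astype(int), bins - 1)
num_in_bin = np.bincount(idx, minlength=bins)   # population of each bin
tot, num_valid_bin = len(g), (num_in_bin > 0).sum()
weights = tot / num_in_bin[idx] / num_valid_bin
print(weights)  # the lone hard example (g=0.95) gets twice the weight of the two easy ones
```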
class GHMRLoss(nn.Cell):
    """
    Wrapper for gradient harmonizing loss for regression.

    Args:
        bins(int): Number of bins. Default: 10.
        momentum(float): Momentum for moving gradient density. Default: 0.0.
        mu(float): Hyperparameter for the smoothed l1 loss. Default: 0.02.

    Returns:
        Tensor, GHM loss for regression.
    """
    def __init__(self, bins=10, momentum=0.0, mu=0.02):
        super(GHMRLoss, self).__init__()
        self.bins = bins
        self.momentum = momentum
        self.mu = mu
        edges_left = np.array([float(x) / bins for x in range(bins)], dtype=np.float32)
        self.edges_left = Tensor(edges_left.reshape((bins, 1, 1, 1, 1)))
        edges_right = np.array([float(x) / bins for x in range(1, bins + 1)], dtype=np.float32)
        edges_right[-1] += 1e-4
        self.edges_right = Tensor(edges_right.reshape((bins, 1, 1, 1, 1)))

        if momentum >= 0:
            self.acc_sum = Parameter(initializer(0, [bins], mstype.float32))

        self.abs = ops.Abs()
        self.sqrt = ops.Sqrt()
        self.cast = ops.Cast()
        self.select = ops.Select()
        self.reshape = ops.Reshape()
        self.reduce_sum = ops.ReduceSum()
        self.max = ops.Maximum()
        self.less = ops.Less()
        self.equal = ops.Equal()
        self.greater = ops.Greater()
        self.logical_and = ops.LogicalAnd()
        self.greater_equal = ops.GreaterEqual()
        self.zeros_like = ops.ZerosLike()
        self.expand_dims = ops.ExpandDims()

    def construct(self, out, target):
        """GHM loss for regression"""
        # ASL1 loss
        diff = out - target
        # gradient length
        g = self.abs(diff / self.sqrt(self.mu * self.mu + diff * diff))
        g = self.expand_dims(g, 0)  # (1, b, c, h, w)

        pos_inds = self.cast(self.equal(target, 1.0), mstype.float32)
        tot = self.max(self.reduce_sum(pos_inds, ()), 1.0)

        # (bin, b, c, h, w)
        inds_mask = self.logical_and(self.greater_equal(g, self.edges_left), self.less(g, self.edges_right))
        zero_matrix = self.cast(self.zeros_like(inds_mask), mstype.float32)
        inds = self.cast(inds_mask, mstype.float32)
        # (bins,)
        num_in_bin = self.reduce_sum(inds, (1, 2, 3, 4))
        valid_bins = self.greater(num_in_bin, 0)
        num_valid_bin = self.reduce_sum(self.cast(valid_bins, mstype.float32), ())

        if self.momentum > 0:
            self.acc_sum = self.select(valid_bins,
                                       self.momentum * self.acc_sum + (1 - self.momentum) * num_in_bin,
                                       self.acc_sum)
            acc_sum = self.acc_sum
            acc_sum = self.reshape(acc_sum, (self.bins, 1, 1, 1, 1))
            acc_sum = acc_sum + zero_matrix
            weights = self.select(self.equal(inds, 1), tot / acc_sum, zero_matrix)
            # (b, c, h, w)
            weights = self.reduce_sum(weights, 0)
        else:
            num_in_bin = self.reshape(num_in_bin, (self.bins, 1, 1, 1, 1))
            num_in_bin = num_in_bin + zero_matrix
            weights = self.select(self.equal(inds, 1), tot / num_in_bin, zero_matrix)
            # (b, c, h, w)
            weights = self.reduce_sum(weights, 0)

        weights = weights / num_valid_bin

        ghmr_loss = self.sqrt(diff * diff + self.mu * self.mu) - self.mu
        ghmr_loss = self.reduce_sum(ghmr_loss * weights, ()) / tot
        return ghmr_loss
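The regression branch uses the authentic smooth L1 (ASL1) form `sqrt(d**2 + mu**2) - mu`, which behaves like `d**2 / (2 * mu)` near zero and like `|d|` for large residuals; its derivative `d / sqrt(d**2 + mu**2)` is bounded in (-1, 1), which is exactly the quantity `construct` bins as the gradient length `g`. A quick numeric check:

```python
import numpy as np

mu = 0.02
d = np.array([0.001, 0.02, 1.0])
asl1 = np.sqrt(d ** 2 + mu ** 2) - mu
g = np.abs(d / np.sqrt(d ** 2 + mu ** 2))  # the "gradient length" used for binning
print(asl1)  # ~[2.5e-05 8.3e-03 9.8e-01]: quadratic for tiny d, close to |d| for large d
print(g)     # ~[0.05, 0.71, 1.0]: bounded below 1, so it fits the fixed [0, 1) histogram bins
```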
class RegLoss(nn.Cell):  # reg_l1_loss
    """
    Wrapper for regression loss.

@ -458,31 +231,6 @@ class RegLoss(nn.Cell): #reg_l1_loss
        return regr_loss


class RegWeightedL1Loss(nn.Cell):
    """
    Wrapper for weighted regression loss.

    Args: None

    Returns:
        Tensor, regression loss.
    """
    def __init__(self):
        super(RegWeightedL1Loss, self).__init__()
        self.reduce_sum = ops.ReduceSum()
        self.gather_feature = TransposeGatherFeature()
        self.cast = ops.Cast()
        self.l1_loss = nn.L1Loss(reduction='sum')

    def construct(self, output, mask, ind, target):
        pred = self.gather_feature(output, ind)
        mask = self.cast(mask, mstype.float32)
        num = self.reduce_sum(mask, ())
        loss = self.l1_loss(pred * mask, target * mask)
        loss = loss / (num + 1e-4)
        return loss
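Both regression losses reduce to the same masked, count-normalized L1: predictions are gathered at the ground-truth indices, padded slots are zeroed by the mask, and the summed error is divided by the number of valid mask entries (the 1e-4 guards against an all-zero mask). In plain numpy:

```python
import numpy as np

pred   = np.array([[0.9, 0.1], [0.4, 0.4], [0.0, 0.0]])  # gathered predictions
target = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 0.0]])
mask   = np.array([[1., 1.], [1., 1.], [0., 0.]])        # third slot is padding
loss = np.abs(pred * mask - target * mask).sum() / (mask.sum() + 1e-4)
print(loss)  # 0.4 / 4 -> ~0.1
```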
class LossCallBack(Callback):
    """
    Monitor the loss in training.

@ -22,15 +22,34 @@ import random
import cv2
import numpy as np
import pycocotools.coco as COCO
from .config import dataset_config as data_cfg
from .config import eval_config as eval_cfg
from .model_utils.config import eval_config as eval_cfg
from .image import get_affine_transform, affine_transform


coco_class_name2id = {'person': 1, 'bicycle': 2, 'car': 3, 'motorcycle': 4, 'airplane': 5,
                      'bus': 6, 'train': 7, 'truck': 8, 'boat': 9, 'traffic light': 10,
                      'fire hydrant': 11, 'stop sign': 13, 'parking meter': 14, 'bench': 15,
                      'bird': 16, 'cat': 17, 'dog': 18, 'horse': 19, 'sheep': 20, 'cow': 21,
                      'elephant': 22, 'bear': 23, 'zebra': 24, 'giraffe': 25, 'backpack': 27,
                      'umbrella': 28, 'handbag': 31, 'tie': 32, 'suitcase': 33, 'frisbee': 34,
                      'skis': 35, 'snowboard': 36, 'sports ball': 37, 'kite': 38, 'baseball bat': 39,
                      'baseball glove': 40, 'skateboard': 41, 'surfboard': 42, 'tennis racket': 43,
                      'bottle': 44, 'wine glass': 46, 'cup': 47, 'fork': 48, 'knife': 49, 'spoon': 50,
                      'bowl': 51, 'banana': 52, 'apple': 53, 'sandwich': 54, 'orange': 55, 'broccoli': 56,
                      'carrot': 57, 'hot dog': 58, 'pizza': 59, 'donut': 60, 'cake': 61, 'chair': 62,
                      'couch': 63, 'potted plant': 64, 'bed': 65, 'dining table': 67, 'toilet': 70,
                      'tv': 72, 'laptop': 73, 'mouse': 74, 'remote': 75, 'keyboard': 76, 'cell phone': 77,
                      'microwave': 78, 'oven': 79, 'toaster': 80, 'sink': 81, 'refrigerator': 82,
                      'book': 84, 'clock': 85, 'vase': 86, 'scissors': 87, 'teddy bear': 88,
                      'hair drier': 89, 'toothbrush': 90}


def coco_box_to_bbox(box):
    """convert height/width to position coordinates"""
    bbox = np.array([box[0], box[1], box[0] + box[2], box[1] + box[3]], dtype=np.float32)
    return bbox
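COCO annotations store boxes as `[x, y, width, height]`; the helper converts them to corner form `[x1, y1, x2, y2]`:

```python
print(coco_box_to_bbox([10, 20, 30, 40]))  # -> [10. 20. 40. 60.]
```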
def resize_image(image, anns, width, height):
    """resize image to specified scale"""
    h, w = image.shape[0], image.shape[1]

@ -121,7 +140,7 @@ def visual_image(img, annos, save_path, ratio=None, height=None, width=None, nam
    num_objects = len(annos)
    name_list = []
    id_list = []
    for class_name, class_id in data_cfg.coco_class_name2id.items():
    for class_name, class_id in coco_class_name2id.items():
        name_list.append(class_name)
        id_list.append(class_id)


@ -17,7 +17,6 @@ Train CenterNet and get network model files(.ckpt)
"""

import os
import argparse
import mindspore.communication.management as D
from mindspore.communication.management import get_rank
from mindspore import context

@ -29,55 +28,22 @@ from mindspore.nn.optim import Adam
from mindspore import log as logger
from mindspore.common import set_seed
from mindspore.profiler import Profiler

from src.dataset import COCOHP
from src.centernet_det import CenterNetLossCell, CenterNetWithLossScaleCell
from src.centernet_det import CenterNetWithoutLossScaleCell
from src.utils import LossCallBack, CenterNetPolynomialDecayLR, CenterNetMultiEpochsDecayLR
from src.config import dataset_config, net_config, train_config
from src.model_utils.config import config, dataset_config, net_config, train_config
from src.model_utils.moxing_adapter import moxing_wrapper
from src.model_utils.device_adapter import get_device_id, get_rank_id, get_device_num


_current_dir = os.path.dirname(os.path.realpath(__file__))

parser = argparse.ArgumentParser(description='CenterNet training')
parser.add_argument('--device_target', type=str, default='Ascend', choices=['Ascend', 'CPU'],
                    help='device where the code will be implemented. (Default: Ascend)')
parser.add_argument("--distribute", type=str, default="true", choices=["true", "false"],
                    help="Run distribute, default is true.")
parser.add_argument("--need_profiler", type=str, default="false", choices=["true", "false"],
                    help="Profiling to parsing runtime info, default is false.")
parser.add_argument("--profiler_path", type=str, default=" ", help="The path to save profiling data")
parser.add_argument("--epoch_size", type=int, default="1", help="Epoch size, default is 1.")
parser.add_argument("--train_steps", type=int, default=-1, help="Training Steps, default is -1,"
                    "i.e. run all steps according to epoch number.")
parser.add_argument("--device_id", type=int, default=0, help="Device id, default is 0.")
parser.add_argument("--device_num", type=int, default=1, help="Use device nums, default is 1.")
parser.add_argument("--enable_save_ckpt", type=str, default="true", choices=["true", "false"],
                    help="Enable save checkpoint, default is true.")
parser.add_argument("--do_shuffle", type=str, default="true", choices=["true", "false"],
                    help="Enable shuffle for dataset, default is true.")
parser.add_argument("--enable_data_sink", type=str, default="true", choices=["true", "false"],
                    help="Enable data sink, default is true.")
parser.add_argument("--data_sink_steps", type=int, default="-1", help="Sink steps for each epoch, default is -1.")
parser.add_argument("--save_checkpoint_path", type=str, default="", help="Save checkpoint path")
parser.add_argument("--load_checkpoint_path", type=str, default="", help="Load checkpoint file path")
parser.add_argument("--save_checkpoint_steps", type=int, default=1000, help="Save checkpoint steps, default is 1000.")
parser.add_argument("--save_checkpoint_num", type=int, default=1, help="Save checkpoint numbers, default is 1.")
parser.add_argument("--mindrecord_dir", type=str, default="", help="Mindrecord dataset files directory")
parser.add_argument("--mindrecord_prefix", type=str, default="coco_det.train.mind",
                    help="Prefix of MindRecord dataset filename.")
parser.add_argument("--save_result_dir", type=str, default="", help="The path to save the predict results")

args_opt = parser.parse_args()


def _set_parallel_all_reduce_split():
    """set centernet all_reduce fusion split"""
    if net_config.last_level == 5:
        context.set_auto_parallel_context(all_reduce_fusion_config=[16, 56, 96, 136, 175])
    elif net_config.last_level == 6:
        context.set_auto_parallel_context(all_reduce_fusion_config=[18, 59, 100, 141, 182])
    else:
        raise ValueError("The total num of allreduced grads for last level = {} is unknown,"
                         "please re-split after known the true value".format(net_config.last_level))
    context.set_auto_parallel_context(all_reduce_fusion_config=[18, 59, 100, 141, 182])
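The fusion list holds cumulative parameter indices: gradients up to the first index are fused into one all-reduce operation, those between consecutive indices into the next, trading fewer communication launches against overlap with back-propagation. Standalone, the call looks like this (the values are the ones hard-coded above):

```python
from mindspore import context
context.set_auto_parallel_context(all_reduce_fusion_config=[18, 59, 100, 141, 182])
```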
def _get_params_groups(network, optimizer):

@ -101,7 +67,7 @@ def _get_optimizer(network, dataset_size):
        lr_schedule = CenterNetPolynomialDecayLR(learning_rate=train_config.PolyDecay.learning_rate,
                                                 end_learning_rate=train_config.PolyDecay.end_learning_rate,
                                                 warmup_steps=train_config.PolyDecay.warmup_steps,
                                                 decay_steps=args_opt.train_steps,
                                                 decay_steps=config.train_steps,
                                                 power=train_config.PolyDecay.power)
        optimizer = Adam(group_params, learning_rate=lr_schedule, eps=train_config.PolyDecay.eps, loss_scale=1.0)
    elif train_config.lr_schedule == "MultiDecay":

@ -109,7 +75,7 @@ def _get_optimizer(network, dataset_size):
        if not isinstance(multi_epochs, (list, tuple)):
            raise TypeError("multi_epochs must be list or tuple.")
        if not multi_epochs:
            multi_epochs = [args_opt.epoch_size]
            multi_epochs = [config.epoch_size]
        lr_schedule = CenterNetMultiEpochsDecayLR(learning_rate=train_config.MultiDecay.learning_rate,
                                                  warmup_steps=train_config.MultiDecay.warmup_steps,
                                                  multi_epochs=multi_epochs,

@ -125,78 +91,85 @@ def _get_optimizer(network, dataset_size):
    return optimizer


def modelarts_pre_process():
    """modelarts pre process function."""
    config.mindrecord_dir = config.data_path
    config.save_checkpoint_path = os.path.join(config.output_path, config.save_checkpoint_path)


@moxing_wrapper(pre_process=modelarts_pre_process)
def train():
    """training CenterNet"""
    context.set_context(mode=context.GRAPH_MODE, device_target=args_opt.device_target)
    context.set_context(mode=context.GRAPH_MODE, device_target=config.device_target)
    context.set_context(reserve_class_name_in_scope=False)
    context.set_context(save_graphs=False)

    ckpt_save_dir = args_opt.save_checkpoint_path
    ckpt_save_dir = config.save_checkpoint_path
    rank = 0
    device_num = 1
    num_workers = 8
    if args_opt.device_target == "Ascend":
    if config.device_target == "Ascend":
        context.set_context(enable_auto_mixed_precision=False)
        context.set_context(device_id=args_opt.device_id)
        if args_opt.distribute == "true":
        context.set_context(device_id=get_device_id())
        if config.distribute == "true":
            D.init()
            device_num = args_opt.device_num
            rank = args_opt.device_id % device_num
            ckpt_save_dir = args_opt.save_checkpoint_path + 'ckpt_' + str(get_rank()) + '/'
            device_num = get_device_num()
            rank = get_rank_id()
            ckpt_save_dir = config.save_checkpoint_path + 'ckpt_' + str(get_rank()) + '/'

            context.reset_auto_parallel_context()
            context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL, gradients_mean=True,
                                              device_num=device_num)
            _set_parallel_all_reduce_split()
    else:
        args_opt.distribute = "false"
        args_opt.need_profiler = "false"
        args_opt.enable_data_sink = "false"
        config.distribute = "false"
        config.need_profiler = "false"
        config.enable_data_sink = "false"

    # Start create dataset!
    # mindrecord files will be generated at args_opt.mindrecord_dir such as centernet.mindrecord0, 1, ... file_num.
    logger.info("Begin creating dataset for CenterNet")
    coco = COCOHP(dataset_config, run_mode="train", net_opt=net_config, save_path=args_opt.save_result_dir)
    dataset = coco.create_train_dataset(args_opt.mindrecord_dir, args_opt.mindrecord_prefix,
    coco = COCOHP(dataset_config, run_mode="train", net_opt=net_config, save_path=config.save_result_dir)
    dataset = coco.create_train_dataset(config.mindrecord_dir, config.mindrecord_prefix,
                                        batch_size=train_config.batch_size, device_num=device_num, rank=rank,
                                        num_parallel_workers=num_workers, do_shuffle=args_opt.do_shuffle == 'true')
                                        num_parallel_workers=num_workers, do_shuffle=config.do_shuffle == 'true')
    dataset_size = dataset.get_dataset_size()
    logger.info("Create dataset done!")

    net_with_loss = CenterNetLossCell(net_config)

    args_opt.train_steps = args_opt.epoch_size * dataset_size
    logger.info("train steps: {}".format(args_opt.train_steps))
    config.train_steps = config.epoch_size * dataset_size
    logger.info("train steps: {}".format(config.train_steps))

    optimizer = _get_optimizer(net_with_loss, dataset_size)

    enable_static_time = args_opt.device_target == "CPU"
    callback = [TimeMonitor(args_opt.data_sink_steps), LossCallBack(dataset_size, enable_static_time)]
    if args_opt.enable_save_ckpt == "true" and args_opt.device_id % min(8, device_num) == 0:
        config_ck = CheckpointConfig(save_checkpoint_steps=args_opt.save_checkpoint_steps,
                                     keep_checkpoint_max=args_opt.save_checkpoint_num)
    enable_static_time = config.device_target == "CPU"
    callback = [TimeMonitor(config.data_sink_steps), LossCallBack(dataset_size, enable_static_time)]
    if config.enable_save_ckpt == "true" and get_device_id() % min(8, device_num) == 0:
        config_ck = CheckpointConfig(save_checkpoint_steps=config.save_checkpoint_steps,
                                     keep_checkpoint_max=config.save_checkpoint_num)
        ckpoint_cb = ModelCheckpoint(prefix='checkpoint_centernet',
                                     directory=None if ckpt_save_dir == "" else ckpt_save_dir, config=config_ck)
        callback.append(ckpoint_cb)

    if args_opt.load_checkpoint_path:
        param_dict = load_checkpoint(args_opt.load_checkpoint_path)
    if config.load_checkpoint_path:
        param_dict = load_checkpoint(config.load_checkpoint_path)
        load_param_into_net(net_with_loss, param_dict)
    if args_opt.device_target == "Ascend":
    if config.device_target == "Ascend":
        net_with_grads = CenterNetWithLossScaleCell(net_with_loss, optimizer=optimizer,
                                                    sens=train_config.loss_scale_value)
    else:
        net_with_grads = CenterNetWithoutLossScaleCell(net_with_loss, optimizer=optimizer)

    model = Model(net_with_grads)
    model.train(args_opt.epoch_size, dataset, callbacks=callback,
                dataset_sink_mode=(args_opt.enable_data_sink == "true"), sink_size=args_opt.data_sink_steps)
    model.train(config.epoch_size, dataset, callbacks=callback,
                dataset_sink_mode=(config.enable_data_sink == "true"), sink_size=config.data_sink_steps)


if __name__ == '__main__':
    if args_opt.need_profiler == "true":
        profiler = Profiler(output_path=args_opt.profiler_path)
    if config.need_profiler == "true":
        profiler = Profiler(output_path=config.profiler_path)
    set_seed(317)
    train()
    if args_opt.need_profiler == "true":
    if config.need_profiler == "true":
        profiler.analyse()