!3310 move deeplabv3 and resnext50 from model_zoo to model_zoo/official/cv

Merge pull request !3310 from z00378171/master
This commit is contained in:
mindspore-ci-bot 2020-07-22 15:10:39 +08:00 committed by Gitee
commit de14b851e1
43 changed files with 89 additions and 8 deletions

View File

@@ -1,11 +1,16 @@
# DeeplabV3 Example
## Description
This is an example of training DeepLabV3 with the PASCAL VOC 2012 dataset in MindSpore.
## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the VOC 2012 dataset for training.
- Run `./src/remove_gt_colormap.py` to remove the label color map:
``` bash
python remove_gt_colormap.py --original_gt_folder GT_FOLDER --output_dir OUTPUT_DIR
```
> Notes:
If you are running a fine-tuning or evaluation task, prepare the corresponding checkpoint file.
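For reference, a minimal sketch (not the example's own scripts) of loading a prepared checkpoint with MindSpore's serialization API; `build_deeplabv3()` is a hypothetical stand-in for however the network is actually constructed:

``` python
# Sketch only: load a prepared checkpoint for fine-tuning or evaluation.
from mindspore.train.serialization import load_checkpoint, load_param_into_net

net = build_deeplabv3()                         # hypothetical network constructor
param_dict = load_checkpoint("deeplabv3.ckpt")  # path to the prepared checkpoint file
load_param_into_net(net, param_dict)            # copy the parameters into the network
```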
@@ -30,7 +35,7 @@ Set options in evaluation_config.py. Make sure the 'data_file' and 'finetune_ckp
```
## Options and Parameters
It contains the parameters of the DeeplabV3 model and options for training, which are set in the file config.py.
### Options:
```

View File

@@ -0,0 +1,76 @@
# Copyright 2020 The Huawei Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Removes the color map from segmentation annotations.
Removes the color map from the ground truth segmentation annotations and save
the results to output_dir.
"""
import glob
import argparse
import os.path
import numpy as np
from PIL import Image
def _remove_colormap(filename):
"""Removes the color map from the annotation.
Args:
filename: Ground truth annotation filename.
Returns:
Annotation without color map.
"""
return np.array(Image.open(filename))
def _save_annotation(annotation, filename):
"""Saves the annotation as png file.
Args:
annotation: Segmentation annotation.
filename: Output filename.
"""
pil_image = Image.fromarray(annotation.astype(dtype=np.uint8))
pil_image.save(filename, 'PNG')
def main():
parser = argparse.ArgumentParser(description="Demo of argparse")
parser.add_argument('--original_gt_folder', type=str, default='./VOCdevkit/VOC2012/SegmentationClass',
help='Original ground truth annotations.')
parser.add_argument('--segmentation_format', type=str, default='png',
help='Segmentation format.')
parser.add_argument('--output_dir', type=str, default='./VOCdevkit/VOC2012/SegmentationClassRaw',
help='folder to save modified ground truth annotations.')
args = parser.parse_args()
# Create the output directory if not exists.
if not os.path.isdir(args.output_dir):
os.mkdir(args.output_dir)
annotations = glob.glob(os.path.join(args.original_gt_folder,
'*.' + args.segmentation_format))
for annotation in annotations:
raw_annotation = _remove_colormap(annotation)
filename = os.path.basename(annotation)[:-4]
_save_annotation(raw_annotation,
os.path.join(
args.output_dir,
filename + '.' + args.segmentation_format))
if __name__ == '__main__':
main()
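A quick way to see why `np.array(Image.open(filename))` already drops the color map: PASCAL VOC annotations are palette-mode ("P") PNGs, so NumPy reads the palette indices, which are the class IDs, rather than RGB triples. A minimal sketch (the annotation file name below is just an example):

``` python
# Sketch: inspect a VOC annotation to confirm it is a palette (indexed) image.
import numpy as np
from PIL import Image

annotation = Image.open('2007_000032.png')  # example VOC annotation file
print(annotation.mode)                      # 'P' for palette images
labels = np.array(annotation)               # 2-D array of class indices, not RGB
print(labels.shape, labels.dtype, np.unique(labels))
```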

View File

@@ -2,12 +2,12 @@
## Description
This is an example of training ResNext50 in MindSpore.
## Requirements
- Install [MindSpore](http://www.mindspore.cn/install/en).
- Download the dataset.
## Structure
@@ -91,9 +91,9 @@ sh run_standalone_train.sh DEVICE_ID DATA_PATH
```bash
# distributed training example(8p)
sh scripts/run_distribute_train.sh MINDSPORE_HCCL_CONFIG_PATH /dataset/train
# standalone training example
sh scripts/run_standalone_train.sh 0 /dataset/train
```
#### Result
@@ -123,6 +123,6 @@ sh scripts/run_eval.sh 0 /opt/npu/datasets/classification/val /resnext50_100.ckp
The evaluation result will be stored in the scripts path, where you can find results like the following in the log.
```
acc=78.16%(TOP1)
acc=93.88%(TOP5)
```
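For reference, TOP1/TOP5 accuracy counts a sample as correct when its true label is among the 1 or 5 highest-scoring classes. A minimal NumPy sketch of the computation (not the repository's evaluation code):

``` python
# Sketch: compute TOP-k accuracy from raw logits with NumPy.
import numpy as np

def topk_accuracy(logits, labels, k=5):
    """Fraction of samples whose true label is among the k highest scores."""
    topk = np.argsort(logits, axis=1)[:, -k:]      # indices of the k largest logits
    hits = (topk == labels[:, None]).any(axis=1)   # is the true label among them?
    return hits.mean()

logits = np.random.randn(8, 1000)                  # dummy batch: 8 samples, 1000 classes
labels = np.random.randint(0, 1000, size=8)
print(f"acc={topk_accuracy(logits, labels, k=1):.2%}(TOP1)")
print(f"acc={topk_accuracy(logits, labels, k=5):.2%}(TOP5)")
```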