forked from mindspore-Ecosystem/mindspore
fix typos in maskrcnn and fastercnn
parent a3d9720620
commit 4282a43732
@@ -90,7 +90,7 @@ Dataset used: [COCO2017](<https://cocodataset.org/>)
 train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2
 ```

-Each row is an image annotation which split by space, the first column is a relative path of image, the others are box and class infomations of the format [xmin,ymin,xmax,ymax,class]. We read image from an image path joined by the `IMAGE_DIR`(dataset directory) and the relative path in `ANNO_PATH`(the TXT file path), `IMAGE_DIR` and `ANNO_PATH` are setting in `config.py`.
+Each row is an image annotation which split by space, the first column is a relative path of image, the others are box and class information of the format [xmin,ymin,xmax,ymax,class]. We read image from an image path joined by the `IMAGE_DIR`(dataset directory) and the relative path in `ANNO_PATH`(the TXT file path), `IMAGE_DIR` and `ANNO_PATH` are setting in `config.py`.

 # Quick Start
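As context for the annotation-format lines touched above: a minimal sketch of how one row of that TXT file could be parsed, assuming only the format described in the README text. The helper name and the example `IMAGE_DIR` value are hypothetical; the row layout and the `IMAGE_DIR`/`ANNO_PATH` roles are the only parts taken from this hunk.

```python
import os

def parse_annotation_line(line, image_dir):
    """Illustrative parser for one row of the annotation TXT described above.

    Each row: <relative image path> followed by space-separated boxes,
    each box encoded as xmin,ymin,xmax,ymax,class.
    """
    fields = line.strip().split(" ")
    image_path = os.path.join(image_dir, fields[0])
    boxes = []
    for box_str in fields[1:]:
        xmin, ymin, xmax, ymax, cls = (int(v) for v in box_str.split(","))
        boxes.append({"bbox": [xmin, ymin, xmax, ymax], "class": cls})
    return image_path, boxes

# Example with the row shown in the diff; "/data/coco2017" stands in for IMAGE_DIR.
row = "train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2"
path, boxes = parse_annotation_line(row, "/data/coco2017")
print(path)   # /data/coco2017/train2017/0000001.jpg
print(boxes)  # three boxes, e.g. bbox [0, 259, 401, 459] with class 7
```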
@@ -242,7 +242,7 @@ Notes:

 ### Result

-Training result will be stored in the example path, whose folder name begins with "train" or "train_parallel". You can find checkpoint file together with result like the followings in loss_rankid.log.
+Training result will be stored in the example path, whose folder name begins with "train" or "train_parallel". You can find checkpoint file together with result like the following in loss_rankid.log.

 ```log
 # distribute training result(8p)
@@ -265,10 +265,12 @@ sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
 ```

 > checkpoint can be produced in training process.
+>
+> Images size in dataset should be equal to the annotation size in VALIDATION_JSON_FILE, otherwise the evaluation result cannot be displayed properly.

 ### Result

-Eval result will be stored in the example path, whose folder name is "eval". Under this, you can find result like the followings in log.
+Eval result will be stored in the example path, whose folder name is "eval". Under this, you can find result like the following in log.

 ```log
 Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.360
@@ -268,6 +268,8 @@ sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
 ```

 > 在训练过程中生成检查点。
+>
+> 数据集中图片的数量要和VALIDATION_JSON_FILE文件中标记数量一致,否则精度结果展示格式可能出现异常。

 ### 结果

@@ -59,7 +59,7 @@ int DvppCommon::DeInit(void) {

     ret = acldvppDestroyChannel(dvppChannelDesc_);
     if (ret != OK) {
-        std::cout << "Failed to destory dvpp channel, ret = " << ret << "." << std::endl;
+        std::cout << "Failed to destroy dvpp channel, ret = " << ret << "." << std::endl;
         return ret;
     }

@@ -646,7 +646,7 @@ int DvppCommon::CombineJpegdProcess(const RawData& imageInfo, acldvppPixelFormat
         return ret;
     }

-    // In TransferImageH2D function, device buffer will be alloced to store the input image
+    // In TransferImageH2D function, device buffer will be allocated to store the input image
     // Need to pay attention to release of the buffer
    ret = TransferImageH2D(imageInfo, inputImage_);
    if (ret != OK) {
@@ -24,7 +24,7 @@ from mindspore.common.tensor import Tensor

 class ROIAlign(nn.Cell):
     """
-    Extract RoI features from mulitple feature map.
+    Extract RoI features from multiple feature map.

     Args:
         out_size_h (int) - RoI height.
@@ -59,7 +59,7 @@ class SingleRoIExtractor(nn.Cell):
     """
     Extract RoI features from a single level feature map.

-    If there are mulitple input feature levels, each RoI is mapped to a level
+    If there are multiple input feature levels, each RoI is mapped to a level
     according to its scale.

     Args:
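The SingleRoIExtractor docstring edited above says each RoI is mapped to a feature level according to its scale. A rough sketch of that idea, using the generic FPN assignment rule rather than this repository's exact code; `finest_scale=56` and the four levels mirror the `roi_align_finest_scale` and `roi_align_featmap_strides` config values shown later in this diff.

```python
import math

def map_roi_to_level(roi, finest_scale=56, num_levels=4):
    """Map one RoI (xmin, ymin, xmax, ymax) to a feature-pyramid level.

    Sketch of the usual FPN assignment rule: larger RoIs go to coarser
    levels. The constants mirror config values that appear in this diff,
    but the formula is the common convention, not necessarily the exact
    implementation in this repository.
    """
    xmin, ymin, xmax, ymax = roi
    scale = math.sqrt(max(xmax - xmin, 1) * max(ymax - ymin, 1))
    level = int(math.floor(math.log2(scale / finest_scale + 1e-6)))
    return min(max(level, 0), num_levels - 1)

print(map_roi_to_level((0, 0, 56, 56)))    # small RoI  -> level 0 (stride 4 map)
print(map_roi_to_level((0, 0, 448, 448)))  # large RoI  -> level 3 (stride 32 map)
```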
@@ -431,7 +431,7 @@ bash run_distribute_train.sh [RANK_TABLE_FILE] [PRETRAINED_MODEL]

 ### [Training Result](#content)

-Training result will be stored in the example path, whose folder name begins with "train" or "train_parallel". You can find checkpoint file together with result like the followings in loss_rankid.log.
+Training result will be stored in the example path, whose folder name begins with "train" or "train_parallel". You can find checkpoint file together with result like the following in loss_rankid.log.

 ```bash
 # distribute training result(8p)
@@ -457,10 +457,12 @@ bash run_eval.sh [VALIDATION_ANN_FILE_JSON] [CHECKPOINT_PATH]

 > As for the COCO2017 dataset, VALIDATION_ANN_FILE_JSON is refer to the annotations/instances_val2017.json in the dataset directory.
 > checkpoint can be produced and saved in training process, whose folder name begins with "train/checkpoint" or "train_parallel*/checkpoint".
+>
+> Images size in dataset should be equal to the annotation size in VALIDATION_ANN_FILE_JSON, otherwise the evaluation result cannot be displayed properly.

 ### [Evaluation result](#content)

-Inference result will be stored in the example path, whose folder name is "eval". Under this, you can find result like the followings in log.
+Inference result will be stored in the example path, whose folder name is "eval". Under this, you can find result like the following in log.

 ```bash
 Evaluate annotation type *bbox*
@@ -455,6 +455,8 @@ sh run_eval.sh [VALIDATION_ANN_FILE_JSON] [CHECKPOINT_PATH]

 > 关于COCO2017数据集,VALIDATION_ANN_FILE_JSON参考数据集目录下的annotations/instances_val2017.json文件。
 > 检查点可在训练过程中生成并保存,其文件夹名称以“train/checkpoint”或“train_parallel*/checkpoint”开头。
+>
+> 数据集中图片的数量要和VALIDATION_ANN_FILE_JSON文件中标记数量一致,否则精度结果展示格式可能出现异常。

 ### 评估结果

@@ -59,7 +59,7 @@ int DvppCommon::DeInit(void) {

     ret = acldvppDestroyChannel(dvppChannelDesc_);
     if (ret != OK) {
-        std::cout << "Failed to destory dvpp channel, ret = " << ret << "." << std::endl;
+        std::cout << "Failed to destroy dvpp channel, ret = " << ret << "." << std::endl;
         return ret;
     }

@@ -646,7 +646,7 @@ int DvppCommon::CombineJpegdProcess(const RawData& imageInfo, acldvppPixelFormat
         return ret;
     }

-    // In TransferImageH2D function, device buffer will be alloced to store the input image
+    // In TransferImageH2D function, device buffer will be allocated to store the input image
     // Need to pay attention to release of the buffer
    ret = TransferImageH2D(imageInfo, inputImage_);
    if (ret != OK) {
@@ -23,7 +23,7 @@ import mindspore.common.dtype as mstype

 class BboxAssignSample(nn.Cell):
     """
-    Bbox assigner and sampler defination.
+    Bbox assigner and sampler definition.

     Args:
         config (dict): Config.
@@ -24,7 +24,7 @@ from mindspore.common.tensor import Tensor

 class ROIAlign(nn.Cell):
     """
-    Extract RoI features from mulitple feature map.
+    Extract RoI features from mulitiple feature map.

     Args:
         out_size_h (int) - RoI height.
@@ -61,7 +61,7 @@ class SingleRoIExtractor(nn.Cell):
     """
     Extract RoI features from a single level feature map.

-    If there are mulitple input feature levels, each RoI is mapped to a level
+    If there are multiple input feature levels, each RoI is mapped to a level
     according to its scale.

     Args:
@@ -208,7 +208,7 @@ Usage: sh run_standalone_train.sh [PRETRAINED_MODEL]
 "neg_iou_thr": 0.3, # negative sample threshold after IOU
 "pos_iou_thr": 0.7, # positive sample threshold after IOU
 "min_pos_iou": 0.3, # minimal positive sample threshold after IOU
-"num_bboxes": 245520, # total bbox numner
+"num_bboxes": 245520, # total bbox number
 "num_gts": 128, # total ground truth number
 "num_expected_neg": 256, # negative sample number
 "num_expected_pos": 128, # positive sample number
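The IoU thresholds in the hunk above feed the bbox assigner whose docstring is corrected elsewhere in this commit. Below is a self-contained sketch of the generic threshold-based assignment these keys usually drive; it is an illustration under that assumption, not the repository's implementation.

```python
import numpy as np

def assign_by_iou(ious, pos_iou_thr=0.7, neg_iou_thr=0.3, min_pos_iou=0.3):
    """Label anchors from an IoU matrix of shape (num_anchors, num_gts).

    Returns 1 for positive, 0 for negative, -1 for ignored anchors.
    Generic Faster R-CNN-style assignment; the threshold names mirror the
    config keys listed in the diff above, the code is only an illustration.
    """
    labels = np.full(ious.shape[0], -1, dtype=np.int32)
    max_iou_per_anchor = ious.max(axis=1)

    # Anchors whose best overlap is below neg_iou_thr become negatives.
    labels[max_iou_per_anchor < neg_iou_thr] = 0
    # Anchors whose best overlap reaches pos_iou_thr become positives.
    labels[max_iou_per_anchor >= pos_iou_thr] = 1
    # Each ground truth keeps its best anchor as positive when the overlap
    # is at least min_pos_iou, so no ground truth is left unmatched.
    best_anchor_per_gt = ious.argmax(axis=0)
    for gt, anchor in enumerate(best_anchor_per_gt):
        if ious[anchor, gt] >= min_pos_iou:
            labels[anchor] = 1
    return labels

ious = np.array([[0.8, 0.1], [0.2, 0.4], [0.05, 0.1]])
print(assign_by_iou(ious))  # -> [1 1 0]
```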
@@ -220,7 +220,7 @@ Usage: sh run_standalone_train.sh [PRETRAINED_MODEL]
 # roi_alignj
 "roi_layer": dict(type='RoIAlign', out_size=7, mask_out_size=14, sample_num=2), # ROIAlign parameters
 "roi_align_out_channels": 256, # ROIAlign out channels size
-"roi_align_featmap_strides": [4, 8, 16, 32], # stride size for differnt level of ROIAling feature map
+"roi_align_featmap_strides": [4, 8, 16, 32], # stride size for different level of ROIAling feature map
 "roi_align_finest_scale": 56, # finest scale ofr ROIAlign
 "roi_sample_num": 640, # sample number in ROIAling layer

@@ -338,7 +338,7 @@ sh run_distribute_train.sh [RANK_TABLE_FILE] [PRETRAINED_MODEL]

 ### [Training Result](#content)

-Training result will be stored in the example path, whose folder name begins with "train" or "train_parallel". You can find checkpoint file together with result like the followings in loss_rankid.log.
+Training result will be stored in the example path, whose folder name begins with "train" or "train_parallel". You can find checkpoint file together with result like the following in loss_rankid.log.

 ```bash
 # distribute training result(8p)
@@ -369,7 +369,7 @@ sh run_eval.sh [VALIDATION_ANN_FILE_JSON] [CHECKPOINT_PATH]

 ### [Evaluation result](#content)

-Inference result will be stored in the example path, whose folder name is "eval". Under this, you can find result like the followings in log.
+Inference result will be stored in the example path, whose folder name is "eval". Under this, you can find result like the following in log.

 ```bash
 Evaluate annotation type *bbox*
@@ -23,7 +23,7 @@ import mindspore.common.dtype as mstype

 class BboxAssignSample(nn.Cell):
     """
-    Bbox assigner and sampler defination.
+    Bbox assigner and sampler definition.

     Args:
         config (dict): Config.
@@ -22,7 +22,7 @@ from mindspore.common.tensor import Tensor

 class BboxAssignSampleForRcnn(nn.Cell):
     """
-    Bbox assigner and sampler defination.
+    Bbox assigner and sampler definition.

     Args:
         config (dict): Config.
@@ -24,7 +24,7 @@ from mindspore.common.tensor import Tensor

 class ROIAlign(nn.Cell):
     """
-    Extract RoI features from mulitple feature map.
+    Extract RoI features from multiple feature map.

     Args:
         out_size_h (int) - RoI height.
@@ -61,7 +61,7 @@ class SingleRoIExtractor(nn.Cell):
     """
     Extract RoI features from a single level feature map.

-    If there are mulitple input feature levels, each RoI is mapped to a level
+    If there are multiple input feature levels, each RoI is mapped to a level
     according to its scale.

     Args: