forked from mindspore-Ecosystem/mindspore
!4666 fix yolov3_darknet53_quant & yolov3-resnet18 & ssd net README bug
Merge pull request !4666 from chengxb7532/master
This commit is contained in commit b32c5c551e
@@ -66,7 +66,7 @@ To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will ge
 sh run_distribute_train.sh 8 500 0.2 coco /data/hccl.json
 ```
 
-The input parameters are device numbers, epoch size, learning rate, dataset mode and [hccl json configuration file](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html). **It is better to use absolute path.**
+The input parameters are device numbers, epoch size, learning rate, dataset mode and [hccl json configuration file](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools). **It is better to use absolute path.**
 
 You will get the loss value of each step as following:
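As an editorial aside for readers of this hunk: the five positional arguments of the launch command map onto the sentence being edited. A minimal annotated restatement, using only the example values already shown in the diff (the variable names below are illustrative, not part of the script):

```bash
# Annotated restatement of the distributed launch from the hunk above.
# Argument meanings follow the edited sentence; values are the README's example.
DEVICE_NUM=8                  # device numbers
EPOCH_SIZE=500                # epoch size
LR=0.2                        # learning rate
DATASET=coco                  # dataset mode
RANK_TABLE=/data/hccl.json    # hccl json configuration file (absolute path preferred)

sh run_distribute_train.sh "$DEVICE_NUM" "$EPOCH_SIZE" "$LR" "$DATASET" "$RANK_TABLE"
```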
@@ -71,7 +71,7 @@ sh run_distribute_train.sh dataset/coco2014 yolov3_darknet_noquant_ckpt/0-320_10
 sh run_standalone_train.sh dataset/coco2014 yolov3_darknet_noquant_ckpt/0-320_102400.ckpt
 ```
 
-> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html).
+> About rank_table.json, You can generate it by using the [hccl json configuration file](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).
 
 #### Result
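The replacement note points readers at the hccl_tools utility in the model zoo for generating rank_table.json. A hedged sketch of how such a file is typically produced; the script name (hccl_tools.py) and the --device_num flag are assumptions here, so check the utility's own README for the exact interface and output file name:

```bash
# Hedged sketch (not part of this commit): generate the rank_table / hccl json
# with the hccl_tools utility that the updated link points to.
# hccl_tools.py and --device_num are assumed; see the tool's README.
git clone https://gitee.com/mindspore/mindspore.git
cd mindspore/model_zoo/utils/hccl_tools
python hccl_tools.py --device_num "[0,8)"
# The generated json is the file the READMEs refer to as hccl.json / rank_table.json.
```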
@@ -108,14 +108,14 @@ epoch[134], iter[86500], loss:34.303755, 145.18 imgs/sec, lr:1.6245529650404933e
 
 ```
 # infer
-sh run_eval.sh [DATASET_PATH] [CHECKPOINT_PATH]
+sh run_eval.sh [DATASET_PATH] [CHECKPOINT_PATH] [DEVICE_ID]
 ```
 
 #### Launch
 
 ```bash
 # infer with checkpoint
-sh run_eval.sh dataset/coco2014/ checkpoint/0-135.ckpt
+sh run_eval.sh dataset/coco2014/ checkpoint/0-131.ckpt 0
 
 ```
 
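This hunk adds a DEVICE_ID positional argument to the eval launch. An annotated restatement using only the values shown in the diff (the shell variables are illustrative):

```bash
# Annotated restatement of the updated eval launch from the hunk above.
DATASET_PATH=dataset/coco2014/
CHECKPOINT_PATH=checkpoint/0-131.ckpt
DEVICE_ID=0                      # newly added third argument: target device id

sh run_eval.sh "$DATASET_PATH" "$CHECKPOINT_PATH" "$DEVICE_ID"
```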
@@ -51,7 +51,7 @@ To train the model, run `train.py` with the dataset `image_dir`, `anno_path` and
 sh run_distribute_train.sh 8 150 /data/Mindrecord_train /data /data/train.txt /data/hccl.json
 ```
 
-The input variables are device numbers, epoch size, mindrecord directory path, dataset directory path, train TXT file path and [hccl json configuration file](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html). **It is better to use absolute path.**
+The input variables are device numbers, epoch size, mindrecord directory path, dataset directory path, train TXT file path and [hccl json configuration file](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools). **It is better to use absolute path.**
 
 You will get the loss value and time of each step as following:
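For the yolov3-resnet18 hunk above, the six positional arguments likewise map onto the edited sentence. A brief annotated restatement with the README's own example values (variable names illustrative):

```bash
# Annotated restatement of the yolov3-resnet18 distributed launch above.
DEVICE_NUM=8                             # device numbers
EPOCH_SIZE=150                           # epoch size
MINDRECORD_DIR=/data/Mindrecord_train    # mindrecord directory path
DATASET_DIR=/data                        # dataset directory path
TRAIN_TXT=/data/train.txt                # train TXT file path
RANK_TABLE=/data/hccl.json               # hccl json configuration file

sh run_distribute_train.sh "$DEVICE_NUM" "$EPOCH_SIZE" "$MINDRECORD_DIR" "$DATASET_DIR" "$TRAIN_TXT" "$RANK_TABLE"
```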