!13256 fix word spell mistakes

From: @zhouneng2
Reviewed-by: @liangchenghui, @oacjiewen
Signed-off-by: @liangchenghui
mindspore-ci-bot 2021-03-15 16:39:03 +08:00 committed by Gitee
commit f75e607e7b
3 changed files with 8 additions and 8 deletions

@@ -76,20 +76,20 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
└─run_eval.sh # launch evaluating
├─src
├─backbone
-├─_init_.py # initalize
+├─_init_.py # initialize
├─resnet.py # resnext50 backbone
├─utils
-├─_init_.py # initalize
+├─_init_.py # initialize
├─cunstom_op.py # network operation
├─logging.py # print log
├─optimizers_init_.py # get parameters
├─sampler.py # distributed sampler
├─var_init_.py # calculate gain value
-├─_init_.py # initalize
+├─_init_.py # initialize
├─config.py # parameter configuration
├─crossentropy.py # CrossEntropy loss function
├─dataset.py # data preprocessing
-├─head.py # commom head
+├─head.py # common head
├─image_classification.py # get resnet
├─linear_warmup.py # linear warmup learning rate
├─warmup_cosine_annealing.py # learning rate each step
@@ -140,7 +140,7 @@ You can start training by python script:
python train.py --data_dir ~/imagenet/train/ --platform Ascend --is_distributed 0
```
-or shell stript:
+or shell script:
```script
Ascend:
@@ -181,7 +181,7 @@ You can start training by python script:
python eval.py --data_dir ~/imagenet/val/ --platform Ascend --pretrained resnext.ckpt
```
-or shell stript:
+or shell script:
```script
# Evaluation

@@ -22,7 +22,7 @@ __all__ = ['CommonHead']
class CommonHead(nn.Cell):
"""
-commom architecture definition.
+common architecture definition.
Args:
num_classes (int): Number of classes.

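For reference, a minimal sketch (not part of this commit) of what a head module in the spirit of CommonHead looks like in MindSpore: a single fully connected layer mapping backbone features to class logits. The class name SimpleHead and the default in_channels=2048 are assumptions for the example.

```python
import mindspore.nn as nn

class SimpleHead(nn.Cell):
    """Illustrative common head: backbone features -> class logits."""
    def __init__(self, num_classes, in_channels=2048):
        super(SimpleHead, self).__init__()
        # in_channels=2048 matches a typical ResNeXt50 final feature width (assumption)
        self.fc = nn.Dense(in_channels, num_classes)

    def construct(self, x):
        # x: flattened features of shape (batch, in_channels)
        return self.fc(x)
```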
@@ -161,7 +161,7 @@ def parse_args(cloud_args=None):
if args.is_dynamic_loss_scale == 1:
args.loss_scale = 1 # for dynamic loss scale can not set loss scale in momentum opt
-# select for master rank save ckpt or all rank save, compatiable for model parallel
+# select for master rank save ckpt or all rank save, compatible for model parallel
args.rank_save_ckpt_flag = 0
if args.is_save_on_master:
if args.rank == 0:
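
For context, a minimal sketch (not part of this commit) of how the flags above are typically consumed: with dynamic loss scaling the Momentum optimizer keeps loss_scale=1 and a DynamicLossScaleManager adjusts the scale at run time, otherwise a FixedLossScaleManager uses the configured loss_scale. The helper name build_model and the net, lr, and loss_fn arguments are assumptions for the example.

```python
from mindspore import Model
from mindspore.nn.optim import Momentum
from mindspore.train.loss_scale_manager import (DynamicLossScaleManager,
                                                FixedLossScaleManager)

def build_model(args, net, lr, loss_fn):
    # With is_dynamic_loss_scale == 1, args.loss_scale was forced to 1 above,
    # so the optimizer does not apply a fixed scale of its own.
    opt = Momentum(net.trainable_params(), lr, momentum=0.9,  # momentum value illustrative
                   loss_scale=args.loss_scale)
    if args.is_dynamic_loss_scale == 1:
        manager = DynamicLossScaleManager()   # scale raised/lowered on overflow at run time
    else:
        manager = FixedLossScaleManager(args.loss_scale, drop_overflow_update=False)
    return Model(net, loss_fn=loss_fn, optimizer=opt, loss_scale_manager=manager)
```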