!21704 update bert readme

Merge pull request !21704 from chenhaozhe/code_docs_update_bert_readme
i-robot 2021-08-17 07:25:43 +00:00 committed by Gitee
commit c936fa706e
3 changed files with 12 additions and 8 deletions


@@ -654,8 +654,10 @@ The result will be as follows:
- Export on local
We only support export with the fine-tuned downstream task model and yaml config file, because the pretrained model cannot be used for inference tasks on its own.
```shell
-python export.py --config_path [../../*.yaml] --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
+python export.py --config_path [../../*.yaml] --export_ckpt_file [CKPT_PATH] --export_file_name [FILE_NAME] --file_format [FILE_FORMAT]
```
- Export on ModelArts (if you want to run on ModelArts, please check the official documentation of [modelarts](https://support.huaweicloud.com/modelarts/); you can start as follows)
@@ -686,8 +688,7 @@ python export.py --config_path [../../*.yaml] --ckpt_file [CKPT_PATH] --file_nam
# You will see bert_ner.mindir under {Output file path}.
```
-The ckpt_file parameter is required,
-`EXPORT_FORMAT` should be in ["AIR", "MINDIR"]
+The `export_ckpt_file` parameter is required, and `file_format` should be in ["AIR", "MINDIR"]
### [Inference Process](#contents)
@@ -799,4 +800,3 @@ Refer to the [ModelZoo FAQ](https://gitee.com/mindspore/mindspore/tree/master/mo
- **Q: Why did the training process fail with a shape mismatch error?**
**A**: This is usually caused by the model config `seq_length` not matching the dataset. You can check and modify `seq_length` in the yaml config according to the dataset you use.
The model parameters do not change with `seq_length`; the parameter shapes depend only on the model config `max_position_embeddings`.
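Why the weight shapes are independent of `seq_length` can be sketched as follows — a minimal NumPy illustration, not the repo's actual BERT code, with example dimensions assumed:

```python
import numpy as np

# Minimal sketch, not the repo's BERT implementation: the position-embedding
# table is allocated from max_position_embeddings, so the checkpoint's weight
# shapes do not depend on the per-dataset seq_length. Sizes are example values.
max_position_embeddings = 512
hidden_size = 768
position_table = np.zeros((max_position_embeddings, hidden_size))

# A dataset with seq_length=128 only slices into the same fixed table.
seq_length = 128
batch_positions = position_table[:seq_length]

print(position_table.shape)   # (512, 768) - fixed by max_position_embeddings
print(batch_positions.shape)  # (128, 768) - varies with seq_length
```

This is why a checkpoint trained with one `seq_length` still loads when `seq_length` is changed, as long as `max_position_embeddings` stays the same.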


@@ -613,10 +613,12 @@ bash scripts/squad.sh
## Export MindIR model
Since a pretrained model usually has no application scenario on its own and can only be used after finetuning on a downstream task, currently only export with a downstream task model and yaml config file is supported.
- Export on local
```shell
-python export.py --config_path [../../*.yaml] --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
+python export.py --config_path [../../*.yaml] --export_ckpt_file [CKPT_PATH] --export_file_name [FILE_NAME] --file_format [FILE_FORMAT]
```
- Export on ModelArts
@@ -647,7 +649,7 @@ python export.py --config_path [../../*.yaml] --ckpt_file [CKPT_PATH] --file_nam
# You will see the 'bert_ner.mindir' file under {Output file path}.
```
-The `ckpt_file` parameter is required, and `EXPORT_FORMAT` must be chosen from ["AIR", "MINDIR"].
+The `export_ckpt_file` parameter is required, and `file_format` must be chosen from ["AIR", "MINDIR"].
## Inference Process
@@ -753,4 +755,4 @@ A random seed is set in run_pretrain.py to ensure that each node in distributed training
**A**: Continuous overflow is usually caused by a high learning rate that keeps training from converging. Consider modifying the parameters in the yaml config file: lower `learning_rate` to reduce the initial learning rate, or raise `power` to speed up learning-rate decay.
- **Q: What does the shape mismatch error at runtime mean?**
**A**: A shape mismatch in the Bert model usually means that the model parameter config does not match the spec of the dataset in use, mainly the sequence length. Consider modifying the `seq_length` parameter to match the specific dataset. Changing this parameter does not affect the weight shapes; the weight shapes depend only on the `max_position_embeddings` parameter.
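The `power` remedy in the overflow answer above can be illustrated with a polynomial-decay schedule. This is a sketch of the commonly used form; the exact schedule implemented in the repo and its yaml defaults are assumptions here:

```python
# Sketch of a polynomial-decay learning-rate schedule (the common form; the
# exact formula the repo's yaml-driven trainer uses may differ). lr_init and
# lr_end are assumed example values, not the repo's defaults.
def poly_decay_lr(step, total_steps, lr_init=1e-4, lr_end=0.0, power=1.0):
    frac = min(step, total_steps) / total_steps
    return (lr_init - lr_end) * (1.0 - frac) ** power + lr_end

# Raising `power` drives the learning rate down faster early in training,
# which is why it helps against persistent loss-scale overflow.
for p in (1.0, 5.0):
    print(p, poly_decay_lr(step=100, total_steps=1000, power=p))
```

At 10% of training, `power=5.0` has already reduced the rate to roughly 0.9^5 ≈ 59% of its initial value, versus 90% for `power=1.0`.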


@@ -23,7 +23,9 @@ exit 1
fi
get_real_path(){
-if [ "${1:0:1}" == "/" ]; then
+if [ -z "$1" ]; then
+echo ""
+elif [ "${1:0:1}" == "/" ]; then
echo "$1"
else
echo "$(realpath -m "$PWD/$1")"
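The patched `get_real_path` above can be exercised as follows. This is a sketch: the closing `fi` and `}` of the function are truncated in the hunk and reconstructed here.

```shell
#!/bin/bash
# Sketch of the patched helper from the hunk above; the tail of the function
# is cut off in the diff, so the closing lines are reconstructed.
get_real_path(){
    if [ -z "$1" ]; then
        echo ""
    elif [ "${1:0:1}" == "/" ]; then
        echo "$1"
    else
        echo "$(realpath -m "$PWD/$1")"
    fi
}

get_real_path ""      # empty input now yields an empty string (the fix)
get_real_path /tmp    # absolute paths pass through unchanged
get_real_path data    # relative paths resolve against $PWD
```

The new `-z "$1"` branch is the point of the patch: before it, an empty argument fell through to the `realpath` branch and was silently resolved to `$PWD` instead of staying empty.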