forked from mindspore-Ecosystem/mindspore

Modify config; NCF doesn't support GPU yet

This commit is contained in:
parent de4b9a94fc
commit fb7dc22d2e
@@ -608,9 +608,9 @@ The command above will run in the background, you can view training logs in ner_
 If you choose F1 as assessment method, the result will be as follows:

 ```text
-Precision 0.920507
-Recall 0.948683
-F1 0.920507
+Precision 0.868245
+Recall 0.865611
+F1 0.866926
 ```

 #### evaluation on msra dataset when running on Ascend
@@ -572,9 +572,9 @@ bash scripts/run_ner.sh
 If you choose F1 as the assessment method, the result will be as follows:

 ```text
-Precision 0.920507
-Recall 0.948683
-F1 0.920507
+Precision 0.868245
+Recall 0.865611
+F1 0.866926
 ```

 #### evaluation on the msra dataset after running on Ascend
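The updated scores are internally consistent: F1 is the harmonic mean of precision and recall. A quick sanity-check sketch (pure Python, not part of the repository):

```python
# Sanity check: the reported F1 should match the harmonic mean
# of the reported precision and recall (not repository code).

def f1_score(precision: float, recall: float) -> float:
    """F1 = 2 * P * R / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# Scores reported after this change:
print(round(f1_score(0.868245, 0.865611), 6))  # ~0.866926, as reported
```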
@@ -127,7 +127,7 @@ large_net_cfg:
 num_hidden_layers: 24
 num_attention_heads: 16
 intermediate_size: 4096
-hidden_act: "gelu"
+hidden_act: "fast_gelu"
 hidden_dropout_prob: 0.1
 attention_probs_dropout_prob: 0.1
 max_position_embeddings: 512
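The config swaps `hidden_act` from `"gelu"` to `"fast_gelu"`, a cheaper approximation of GeLU. The sketch below compares exact (erf-based) GeLU with a common sigmoid approximation; this illustrates the general idea only, and is not necessarily the exact formula of MindSpore's `fast_gelu` kernel:

```python
import math

def gelu(x: float) -> float:
    # Exact GeLU: x * Phi(x), where Phi is the standard normal CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def fast_gelu(x: float) -> float:
    # Common sigmoid-based approximation of GeLU (illustrative only;
    # MindSpore's fast_gelu kernel may differ internally).
    return x / (1.0 + math.exp(-1.702 * x))

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  gelu={gelu(x):+.4f}  fast_gelu={fast_gelu(x):+.4f}")
```

The two curves agree closely near zero and diverge only slightly in the tails, which is why the swap is usually accuracy-neutral while being faster on accelerator hardware.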
@@ -171,4 +171,4 @@ enable_save_ckpt: ["true", "false"]
 enable_lossscale: ["true", "false"]
 do_shuffle: ["true", "false"]
 enable_data_sink: ["true", "false"]
 allreduce_post_accumulation: ["true", "false"]
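These options take the literal strings `"true"`/`"false"` rather than Python booleans. A minimal sketch of how such string flags are typically normalized before use (the helper name is hypothetical, not from the repository):

```python
def parse_bool_flag(value: str) -> bool:
    # Run scripts pass flags as the strings "true"/"false", so they
    # must be normalized to real booleans (hypothetical helper).
    lowered = value.strip().lower()
    if lowered not in ("true", "false"):
        raise ValueError(f"expected 'true' or 'false', got {value!r}")
    return lowered == "true"

print(parse_bool_flag("true"))   # True
print(parse_bool_flag("False"))  # False
```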
@@ -78,8 +78,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil

 # [Environment Requirements](#contents)

-- Hardware(Ascend/GPU)
-    - Prepare hardware environment with Ascend or GPU processor.
+- Hardware(Ascend)
+    - Prepare hardware environment with Ascend.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
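After this change the README lists Ascend as the only supported hardware. A defensive check along these lines can fail fast before training starts; the helper and the supported-target set are illustrative assumptions, not repository code:

```python
# Illustrative guard: after this change only Ascend is documented as
# supported, so reject other device targets early (hypothetical helper).
SUPPORTED_TARGETS = {"Ascend"}

def validate_device_target(target: str) -> str:
    if target not in SUPPORTED_TARGETS:
        raise ValueError(
            f"device_target {target!r} is not supported; "
            f"choose one of {sorted(SUPPORTED_TARGETS)}"
        )
    return target

print(validate_device_target("Ascend"))  # Ascend
```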
@@ -308,7 +308,7 @@ Inference result is saved in current path, you can find result like this in acc.

 ### Inference

-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html). Following the steps below, this is a simple example:

 <https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference.html>
