!48035 whitelist for O1

Merge pull request !48035 from 于振华/code_docs_whitelist_O1_0118
This commit is contained in:
i-robot 2023-01-19 06:16:51 +00:00 committed by Gitee
commit 3f47353a77
No known key found for this signature in database
GPG Key ID: 173E9B9CA92EEF8F
3 changed files with 5 additions and 1 deletion

View File

@@ -19,7 +19,7 @@
- **amp_level** (str) - The optional `level` argument of `mindspore.build_train_network`. `level` is the mixed-precision level; the supported values are ["O0", "O1", "O2", "O3", "auto"]. Default: "O0".
  - "O0": Do not change.
  - "O1": Cast the operators in the whitelist to float16; the remaining operators keep float32. The operators in the whitelist are: [Conv1d, Conv2d, Conv3d, Conv1dTranspose, Conv2dTranspose, Conv3dTranspose, Dense, LSTMCell, RNNCell, GRUCell, MatMul, BatchMatMul, PReLU, ReLU, Ger].
  - "O2": Cast the network to float16, keep BatchNorm in float32, and use a dynamically adjusted loss scale.
  - "O3": Cast the network, including BatchNorm, to float16 and do not use loss scaling.
  - auto: Set the expert-recommended mixed-precision level for each device, e.g. "O2" on GPU and "O3" on Ascend. This setting may not suit every scenario; users are advised to set `amp_level` according to their specific network.
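A rough, illustrative sketch (not part of this commit, and not the actual amp implementation) of what the "O1" whitelist behaviour amounts to: cells whose type is on the whitelist are switched to float16 compute via `Cell.to_float`, while everything else keeps float32. The `WHITE_LIST` tuple and the toy network below are assumptions for illustration only.

```python
import mindspore as ms
from mindspore import nn

# Assumed small subset of the whitelist documented above; the real list is longer.
WHITE_LIST = (nn.Conv2d, nn.Dense, nn.ReLU)

def cast_whitelist_to_fp16(network: nn.Cell) -> nn.Cell:
    """Cast whitelisted cells to float16 compute; leave all other cells in float32."""
    for _, cell in network.cells_and_names():
        if isinstance(cell, WHITE_LIST):
            cell.to_float(ms.float16)  # this cell now computes in float16
    return network

net = nn.SequentialCell([nn.Dense(32, 16), nn.ReLU(), nn.Dense(16, 10)])
net = cast_whitelist_to_fp16(net)
```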

View File

@@ -374,6 +374,8 @@ def build_train_network(network, optimizer, loss_fn=None, level='O0', boost_level
    - "O0": Do not change.
    - "O1": Cast the operators in white_list to float16, the remaining operators are kept in float32.
      The operators in the whitelist: [Conv1d, Conv2d, Conv3d, Conv1dTranspose, Conv2dTranspose,
      Conv3dTranspose, Dense, LSTMCell, RNNCell, GRUCell, MatMul, BatchMatMul, PReLU, ReLU, Ger].
    - "O2": Cast network to float16, keep batchnorm and `loss_fn` (if set) run in float32,
      using dynamic loss scale.
    - "O3": Cast network to float16, with additional property `keep_batchnorm_fp32=False` .

View File

@@ -132,6 +132,8 @@ class Model:
    - "O0": Do not change.
    - "O1": Cast the operators in white_list to float16, the remaining operators are kept in float32.
      The operators in the whitelist: [Conv1d, Conv2d, Conv3d, Conv1dTranspose, Conv2dTranspose,
      Conv3dTranspose, Dense, LSTMCell, RNNCell, GRUCell, MatMul, BatchMatMul, PReLU, ReLU, Ger].
    - "O2": Cast network to float16, keep BatchNorm run in float32, using dynamic loss scale.
    - "O3": Cast network to float16, the BatchNorm is also cast to float16, loss scale will not be used.
    - auto: Set level to recommended level in different devices. Set level to "O2" on GPU, set
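Similarly, a hedged sketch (not part of this commit) of passing `amp_level="O1"` when constructing `Model`; the network, loss function, optimizer, and the commented-out training call are assumptions for illustration.

```python
from mindspore import nn
from mindspore.train import Model

net = nn.SequentialCell([nn.Dense(16, 8), nn.ReLU(), nn.Dense(8, 2)])
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# Whitelisted operators (Dense, ReLU, ...) are cast to float16; the rest keep float32.
model = Model(net, loss_fn=loss_fn, optimizer=opt, metrics={"acc"}, amp_level="O1")
# model.train(1, train_dataset)  # dataset construction omitted here
```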