!27782 Add a description of loss_scale_manager to the Model comment

Merge pull request !27782 from wangnan39/code_docs_model_kwargs
i-robot 2021-12-17 03:44:16 +00:00 committed by Gitee
commit 9979c6a694
3 changed files with 14 additions and 8 deletions


@@ -17,9 +17,12 @@
  - O0: Do not change.
  - O2: Cast the network to float16, keep batchnorm in float32, and use a dynamically adjusted loss scale strategy.
- - O3: Cast the network to float16 and set the property `keep_batchnorm_fp32=False` for the `mindspore.build_train_network` interface.
- - auto: Set the expert-recommended mixed precision level per device, e.g. O2 on GPU and O3 on Ascend. This setting does not fit every scenario; users should set `amp_level` for their specific network, with O2 recommended on GPU and O3 recommended on Ascend. For details on `amp_level`, see `mindspore.build_train_network`.
+ - O3: Cast the network, including batchnorm, to float16, and do not use a loss scale strategy.
+ - auto: Set the expert-recommended mixed precision level per device, e.g. O2 on GPU and O3 on Ascend. This setting may not fit some scenarios; users should set `amp_level` for their specific network,
+   with O2 recommended on GPU and O3 recommended on Ascend.
+   The batchnorm strategy can be changed by setting `keep_batchnorm_fp32` in `kwargs`; `keep_batchnorm_fp32` must be a bool. The loss scale strategy can be changed by setting `loss_scale_manager` in `kwargs`; `loss_scale_manager` must be a subclass of :class:`mindspore.LossScaleManager`.
+   For details on `amp_level`, see `mindspore.build_train_network`.
  **Examples:**
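
For reference, a minimal sketch of what the new `kwargs` description means in practice. It assumes the MindSpore 1.x top-level exports (`Model`, `FixedLossScaleManager`) and a toy `nn.Dense` network chosen purely for illustration; it is intended for a GPU/Ascend device where O2 applies.

```python
import mindspore.nn as nn
from mindspore import Model, FixedLossScaleManager

# Toy network, loss, and optimizer, purely for illustration.
net = nn.Dense(16, 10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O2" would normally keep batchnorm in float32 and use a
# dynamic loss scale; the two kwargs below overwrite both defaults.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2",
              keep_batchnorm_fp32=False,  # must be a bool
              loss_scale_manager=FixedLossScaleManager(loss_scale=1024.0))
```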


@@ -153,8 +153,8 @@ def build_train_network(network, optimizer, loss_fn=None, level='O0', boost_leve
      level to O3 Ascend. The recommended level is chosen by the export experience, cannot
      always general. User should specify the level for special network.
- O2 is recommended on GPU, O3 is recommended on Ascend.Property of `keep_batchnorm_fp32` , `cast_model_type`
- and `loss_scale_manager` determined by `level` setting may be overwritten by settings in `kwargs` .
+ O2 is recommended on GPU, O3 is recommended on Ascend. Property of `keep_batchnorm_fp32`, `cast_model_type`
+ and `loss_scale_manager` determined by `level` setting may be overwritten by settings in `kwargs`.
  boost_level (str): Option for argument `level` in `mindspore.boost` , level for boost mode
      training. Supports ["O0", "O1", "O2"]. Default: "O0".
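
A hedged illustration of the overwrite behavior this hunk describes: the sketch below assumes `build_train_network` and `DynamicLossScaleManager` are importable from the top-level `mindspore` package, as the docstring's own references suggest, and again uses a toy network.

```python
import mindspore.nn as nn
from mindspore import build_train_network, DynamicLossScaleManager

net = nn.Dense(16, 10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# level="O3" alone would cast batchnorm to float16 and use no loss
# scale; both properties determined by the level are overwritten here
# by the kwargs, which is exactly what the docstring change documents.
train_net = build_train_network(net, opt, loss_fn=loss, level="O3",
                                keep_batchnorm_fp32=True,
                                loss_scale_manager=DynamicLossScaleManager())
```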


@@ -140,14 +140,17 @@ class Model:
  - O0: Do not change.
  - O2: Cast network to float16, keep batchnorm run in float32, using dynamic loss scale.
- - O3: Cast network to float16 and add property `keep_batchnorm_fp32=False` to
-   :func:`mindspore.build_train_network`.
+ - O3: Cast network to float16, the batchnorm is also cast to float16, loss scale will not be used.
  - auto: Set level to recommended level in different devices. Set level to O2 on GPU, set
    level to O3 on Ascend. The recommended level is chosen by the export experience, not applicable to all
    scenarios. User should specify the level for special network.
- O2 is recommended on GPU, O3 is recommended on Ascend. The more detailed explanation of `amp_level` setting
- can be found at `mindspore.build_train_network`.
+ O2 is recommended on GPU, O3 is recommended on Ascend.
+ The batchnorm strategy can be changed by `keep_batchnorm_fp32` settings in `kwargs`. `keep_batchnorm_fp32`
+ must be a bool. The loss scale strategy can be changed by `loss_scale_manager` setting in `kwargs`.
+ `loss_scale_manager` should be a subclass of :class:`mindspore.LossScaleManager`.
+ The more detailed explanation of `amp_level` setting can be found at `mindspore.build_train_network`.
  boost_level (str): Option for argument `level` in `mindspore.boost`, level for boost mode
      training. Supports ["O0", "O1", "O2"]. Default: "O0".
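
Since the new text requires `loss_scale_manager` to be a subclass of :class:`mindspore.LossScaleManager`, a sketch of a custom manager may help. `ConstantLossScale` is a hypothetical name; the three overridden methods follow the `LossScaleManager` interface as documented in MindSpore 1.x, and returning `None` from `get_update_cell` mirrors `FixedLossScaleManager(drop_overflow_update=False)`.

```python
import mindspore.nn as nn
from mindspore import Model, LossScaleManager

class ConstantLossScale(LossScaleManager):
    """Hypothetical manager: a fixed scale that is never adjusted."""
    def __init__(self, scale=256.0):
        super().__init__()
        self._scale = scale

    def get_loss_scale(self):
        return self._scale

    def update_loss_scale(self, overflow):
        pass  # a real manager might shrink the scale when overflow is True

    def get_update_cell(self):
        return None  # None means the loss scale is applied statically

net = nn.Dense(16, 10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2",
              loss_scale_manager=ConstantLossScale())
```

Passing an object that is not a `LossScaleManager` subclass here would fail `Model`'s kwargs validation, which is the constraint the new docstring text spells out.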