!27782 Add a description of loss_scale_manager to the Model comment
Merge pull request !27782 from wangnan39/code_docs_model_kwargs
commit 9979c6a694
@@ -17,9 +17,12 @@
         - O0: No change.
         - O2: Cast the network to float16, keep batchnorm in float32, and use a dynamic loss scale strategy.
-        - O3: Cast the network to float16 and set the property `keep_batchnorm_fp32=False` for the `mindspore.build_train_network` interface.
-        - auto: Set the expert-recommended mixed-precision level for each device, e.g. O2 on GPU and O3 on Ascend. This setting does not fit every scenario; users are advised to set `amp_level` for their specific network. O2 is recommended on GPU, O3 on Ascend. For details on `amp_level`, see `mindspore.build_train_network`.
+        - O3: Cast the network, including batchnorm, to float16; no loss scale strategy is used.
+        - auto: Set the expert-recommended mixed-precision level for each device, e.g. O2 on GPU and O3 on Ascend. This setting may not fit some scenarios; users are advised to set `amp_level` for their specific network.
+
+        O2 is recommended on GPU, O3 is recommended on Ascend.
+        The batchnorm strategy can be changed by setting `keep_batchnorm_fp32` via `kwargs`; `keep_batchnorm_fp32` must be a bool. The loss scale strategy can be changed by setting `loss_scale_manager` via `kwargs`; `loss_scale_manager` must be a subclass of :class:`mindspore.LossScaleManager`.
+        For details on `amp_level`, see `mindspore.build_train_network`.

         **Examples:**
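For orientation, here is a minimal sketch of how these `amp_level` values are passed when constructing `Model`. The single-layer network, loss, and optimizer are illustrative placeholders, assuming a MindSpore 1.x environment:

```python
import mindspore.nn as nn
from mindspore import Model

# Placeholder network/loss/optimizer; any nn.Cell-based network works here.
net = nn.Dense(16, 10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# O2: float16 network, float32 batchnorm, dynamic loss scale.
# "auto" would instead pick O2 on GPU and O3 on Ascend.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
```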
@@ -153,8 +153,8 @@ def build_train_network(network, optimizer, loss_fn=None, level='O0', boost_level
         level to O3 on Ascend. The recommended level is chosen by expert experience and cannot
         always generalize. Users should specify the level for their particular network.

-        O2 is recommended on GPU, O3 is recommended on Ascend.Property of `keep_batchnorm_fp32` , `cast_model_type`
-        and `loss_scale_manager` determined by `level` setting may be overwritten by settings in `kwargs` .
+        O2 is recommended on GPU, O3 is recommended on Ascend. Property of `keep_batchnorm_fp32`, `cast_model_type`
+        and `loss_scale_manager` determined by the `level` setting may be overwritten by settings in `kwargs`.

     boost_level (str): Option for argument `level` in `mindspore.boost`, level for boost mode
         training. Supports ["O0", "O1", "O2"]. Default: "O0".
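A short sketch of the override behavior this hunk documents, again with placeholder network, loss, and optimizer and assuming the MindSpore 1.x API:

```python
import mindspore.nn as nn
from mindspore import build_train_network, FixedLossScaleManager

net = nn.Dense(16, 10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# level="O2" implies a dynamic loss scale by default; passing an explicit
# loss_scale_manager through kwargs overwrites that level-derived setting.
train_net = build_train_network(net, opt, loss_fn=loss, level="O2",
                                loss_scale_manager=FixedLossScaleManager(loss_scale=1024.0))
```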
@@ -140,14 +140,17 @@ class Model:
         - O0: Do not change.
         - O2: Cast network to float16, keep batchnorm running in float32, using dynamic loss scale.
-        - O3: Cast network to float16 and add property `keep_batchnorm_fp32=False` to
-          :func:`mindspore.build_train_network`.
+        - O3: Cast network to float16; batchnorm is also cast to float16, and loss scale will not be used.
         - auto: Set level to the recommended level on different devices: O2 on GPU, O3 on Ascend.
           The recommended level is chosen by expert experience and is not applicable to all
           scenarios. Users should specify the level for their particular network.

-        O2 is recommended on GPU, O3 is recommended on Ascend. The more detailed explanation of `amp_level` setting
-        can be found at `mindspore.build_train_network`.
+        O2 is recommended on GPU, O3 is recommended on Ascend.
+        The batchnorm strategy can be changed by the `keep_batchnorm_fp32` setting in `kwargs`; `keep_batchnorm_fp32`
+        must be a bool. The loss scale strategy can be changed by the `loss_scale_manager` setting in `kwargs`;
+        `loss_scale_manager` should be a subclass of :class:`mindspore.LossScaleManager`.
+        The more detailed explanation of the `amp_level` setting can be found at `mindspore.build_train_network`.

     boost_level (str): Option for argument `level` in `mindspore.boost`, level for boost mode
         training. Supports ["O0", "O1", "O2"]. Default: "O0".
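To make the newly added paragraph concrete, here is a hedged sketch of overriding both kwargs on `Model` (placeholder network names, assuming the MindSpore 1.x API):

```python
import mindspore.nn as nn
from mindspore import Model, DynamicLossScaleManager

net = nn.Dense(16, 10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O3" would normally cast batchnorm to float16 and skip loss
# scaling; both of those defaults are overwritten by the kwargs below.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O3",
              keep_batchnorm_fp32=True,
              loss_scale_manager=DynamicLossScaleManager())
```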