!17929 [ME][Compiler] Python API standardization of amp.py and model.py

From: @chenfei52
Reviewed-by: @zh_qh,@ginfung
Signed-off-by: @zh_qh
mindspore-ci-bot 2021-06-08 09:25:39 +08:00 committed by Gitee
commit 4f0f918404
2 changed files with 17 additions and 21 deletions

amp.py

@@ -126,26 +126,26 @@ def build_train_network(network, optimizer, loss_fn=None, level='O0', **kwargs):
- O0: Do not change.
- O2: Cast network to float16, keep batchnorm and `loss_fn` (if set) run in float32,
using dynamic loss scale.
- - O3: Cast network to float16, with additional property 'keep_batchnorm_fp32=False'.
+ - O3: Cast network to float16, with additional property `keep_batchnorm_fp32=False`.
- auto: Set to level to recommended level in different devices. Set level to O2 on GPU, Set
level to O3 Ascend. The recommended level is choose by the export experience, cannot
always general. User should specify the level for special network.
- O2 is recommended on GPU, O3 is recommended on Ascend.Property of 'keep_batchnorm_fp32' , 'cast_model_type'
- and 'loss_scale_manager' determined by 'level' setting may be overwritten by settings in 'kwargs'.
+ O2 is recommended on GPU, O3 is recommended on Ascend.Property of `keep_batchnorm_fp32` , `cast_model_type`
+ and `loss_scale_manager` determined by `level` setting may be overwritten by settings in `kwargs`.
cast_model_type (:class:`mindspore.dtype`): Supports `mstype.float16` or `mstype.float32`.If set, the network
- will be casted to 'cast_model_type'(`mstype.float16` or `mstype.float32`), but not to be casted to the type
- determined by 'level' setting.
- keep_batchnorm_fp32 (bool): Keep Batchnorm run in `float32` when the network is set to cast to 'float16'.
- If set, the 'level' setting will take no effect on this property.
+ will be casted to `cast_model_type`(`mstype.float16` or `mstype.float32`), but not to be casted to the type
+ determined by `level` setting.
+ keep_batchnorm_fp32 (bool): Keep Batchnorm run in `float32` when the network is set to cast to `float16`.
+ If set, the `level` setting will take no effect on this property.
loss_scale_manager (Union[None, LossScaleManager]): If None, not scale the loss, otherwise scale the loss by
- `LossScaleManager`. If set, the 'level' setting will take no effect on this property.
+ `LossScaleManager`. If set, the `level` setting will take no effect on this property.
Raises:
- 1.Auto mixed precision only supported on device 'GPU' and 'Ascend'.If device is cpu, a 'ValueError' exception
+ 1.Auto mixed precision only supported on device GPU and Ascend.If device is CPU, a `ValueError` exception
will be raised.
- 2.If device is 'CPU', property `loss_scale_manager` only can be set as None or FixedLossScaleManager(with
- property `drop_overflow_update`=False), or a 'ValueError' exception will be raised.
+ 2.If device is CPU, property `loss_scale_manager` only can be set as `None` or `FixedLossScaleManager`(with
+ property `drop_overflow_update=False`), or a `ValueError` exception will be raised.
"""
validator.check_value_type('network', network, nn.Cell)
validator.check_value_type('optimizer', optimizer, (nn.Optimizer, acc.FreezeOpt))
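
For reviewers, a minimal usage sketch of the API whose docstring is edited above. It only illustrates the documented behavior; the single Dense layer, loss, and optimizer are placeholder choices, and the import path follows the docstring's own reference to `mindspore.amp.build_train_network`.

import mindspore.nn as nn
from mindspore import amp, FixedLossScaleManager

# Placeholder network, loss and optimizer (any nn.Cell / nn.Optimizer would do).
net = nn.Dense(16, 10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# level='O3' casts the network to float16 and implies keep_batchnorm_fp32=False.
# An explicit loss_scale_manager overrides whatever the level would have chosen;
# per the Raises section, CPU only accepts None or
# FixedLossScaleManager(drop_overflow_update=False).
train_net = amp.build_train_network(net, opt, loss, level='O3',
                                     loss_scale_manager=FixedLossScaleManager(drop_overflow_update=False))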

model.py

@@ -75,24 +75,20 @@ class Model:
elements, including the positions of loss value, predicted value and label. The loss
value would be passed to the `Loss` metric, the predicted value and label would be passed
to other metric. Default: None.
Args:
amp_level (str): Option for argument `level` in `mindspore.amp.build_train_network`, level for mixed
precision training. Supports ["O0", "O2", "O3", "auto"]. Default: "O0".
- O0: Do not change.
- O2: Cast network to float16, keep batchnorm run in float32, using dynamic loss scale.
- - O3: Cast network to float16, with additional property 'keep_batchnorm_fp32=False'.
+ - O3: Cast network to float16, with additional property `keep_batchnorm_fp32=False`.
- auto: Set to level to recommended level in different devices. Set level to O2 on GPU, Set
level to O3 Ascend. The recommended level is choose by the export experience, cannot
- always generalize. User should specify the level for special network.
- O2 is recommended on GPU, O3 is recommended on Ascend.
- loss_scale_manager (Union[None, LossScaleManager]): If it is None, the loss would not be scaled. Otherwise,
- scale the loss by LossScaleManager and optimizer can not be None.It is a key argument.
- e.g. Use `loss_scale_manager=None` to set the value.
- keep_batchnorm_fp32 (bool): Keep Batchnorm running in `float32`. If it is set to true, the level setting before
- will be overwritten. Default: True.
+ always general. User should specify the level for special network.
+ O2 is recommended on GPU, O3 is recommended on Ascend.The more detailed explanation of `amp_level` setting
+ can be found at `mindspore.amp.build_train_network`.
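
As added context (not part of this diff), a short sketch of how the `amp_level` argument described above is passed to `Model`; the network, loss, and optimizer are placeholders.

from mindspore import Model, nn

net = nn.Dense(16, 10)   # placeholder network (any nn.Cell)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level is forwarded as `level` to mindspore.amp.build_train_network;
# the docstring recommends "O2" on GPU, "O3" on Ascend, and "auto" picks per device.
model = Model(net, loss_fn=loss, optimizer=opt, metrics=None, amp_level="O2")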
Examples:
>>> from mindspore import Model, nn
>>>