forked from mindspore-Ecosystem/mindspore

fix en docs problem.

parent 5b3a5787be, commit 0d2331b629
@@ -258,7 +258,7 @@ Boost can automatically accelerate the network, such as less BN, gradient freezing, gradient accumulation, and so on.

    **Returns:**

-    number, the loss value obtained during network training.
+    Tensor, the loss value obtained during network training.

    .. py:method:: check_adasum_enable()
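This hunk corrects the documented return type of the training cell's call from a Python number to a Tensor. For context, a minimal usage sketch of `BoostTrainOneStepCell` (toy network; the shapes and hyperparameters are arbitrary placeholders, not from this commit):

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.boost import BoostTrainOneStepCell

# Toy network, loss, and optimizer; any Cell works the same way.
net = nn.Dense(16, 10)
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
optim = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

train_cell = BoostTrainOneStepCell(nn.WithLossCell(net, loss_fn), optim)

data = Tensor(np.random.randn(32, 16).astype(np.float32))
label = Tensor(np.random.randint(0, 10, (32,)).astype(np.int32))
loss = train_cell(data, label)  # a Tensor with shape (), as the fixed docs state
```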
@@ -289,7 +289,7 @@ Boost can automatically accelerate the network, such as less BN, gradient freezing, gradient accumulation, and so on.

    **Returns:**

-    number, the loss value obtained during network training.
+    Tensor, the loss value obtained during network training.

    .. py:method:: gradient_freeze_process(*inputs)
@@ -301,7 +301,7 @@ Boost can automatically accelerate the network, such as less BN, gradient freezing, gradient accumulation, and so on.

    **Returns:**

-    number, the loss value obtained during network training.
+    Tensor, the loss value obtained during network training.

.. py:class:: mindspore.boost.BoostTrainOneStepWithLossScaleCell(network, optimizer, scale_sense)
@@ -437,7 +437,7 @@ Boost can automatically accelerate the network, such as less BN, gradient freezing, gradient accumulation, and so on.

    **Outputs:**

-    - **Tuple** (Tensor) - The updated weights after adasum processing.
+    - **adasum_parameters** (Tuple(Tensor)) - The updated weights after adasum processing.

.. py:class:: mindspore.boost.DimReduce(network, optimizer, weight, pca_mat_local, n_components, rho, gamma, alpha, sigma, rank, rank_size)
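The renamed output above refers to weights produced by AdaSum (adaptive summation). This is not MindSpore's implementation, just a minimal numpy sketch of the pairwise combination rule from the AdaSum paper, which roughly averages the parallel components of two gradients and sums their orthogonal components:

```python
import numpy as np

def adasum_pair(g_a: np.ndarray, g_b: np.ndarray) -> np.ndarray:
    """Pairwise AdaSum: scale each gradient down by the component it
    shares with the other, then add (formula from the AdaSum paper)."""
    dot = float(np.dot(g_a, g_b))
    scale_a = 1.0 - dot / (2.0 * float(np.dot(g_a, g_a)))
    scale_b = 1.0 - dot / (2.0 * float(np.dot(g_b, g_b)))
    return scale_a * g_a + scale_b * g_b

print(adasum_pair(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # orthogonal -> [1. 1.] (sum)
print(adasum_pair(np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # identical  -> [1. 0.] (average)
```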
@@ -539,6 +539,28 @@

    - **ValueError** – If `dst_type` is neither `mindspore.dtype.float32` nor `mindspore.dtype.float16`.

+   .. py:method:: set_boost(boost_type)
+
+       To improve network performance, you can configure an algorithm inside boost and the framework will automatically enable that algorithm to accelerate network training.
+
+       Make sure the algorithm selected by `boost_type` is in the
+       `algorithm library <https://gitee.com/mindspore/mindspore/tree/master/mindspore/python/mindspore/boost>`_.
+
+       .. note:: Some acceleration algorithms may affect network accuracy; choose with care.
+
+       **Parameters:**
+
+       - **boost_type** (str) – The acceleration algorithm.
+
+       **Returns:**
+
+       Cell, the Cell itself.
+
+       **Raises:**
+
+       - **ValueError** – If `boost_type` is not in the boost algorithm library.
+
    .. py:method:: trainable_params(recurse=True)

        Return the trainable parameters of the Cell.
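A short sketch of how the newly documented `set_boost` might be called. The algorithm name `"less_bn"` is an assumption here, chosen because the less-BN algorithm appears in the linked boost library; substitute any name the library actually exposes:

```python
import mindspore.nn as nn

net = nn.Dense(16, 10)
# Enable one algorithm from the boost algorithm library for this cell;
# per the docs above, an unknown name raises ValueError.
net.set_boost("less_bn")
```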
@@ -241,7 +241,7 @@ class BoostTrainOneStepCell(TrainOneStepCell):
            inputs (tuple(Tensor)): Tuple of input tensors with shape :math:`(N, \ldots)`.

        Outputs:
-            - **loss** (Tensor) - Tensor with shape :math:`()`.
+            - **loss** (Tensor) - Network loss, tensor with shape :math:`()`.
        """
        if self.train_strategy is None:
            step = self.step
@@ -264,9 +264,10 @@ class BoostTrainOneStepCell(TrainOneStepCell):
            loss (Tensor): Tensor with shape :math:`()`.
            grads (tuple(Tensor)): Tuple of gradient tensors.
            sens (Tensor): Tensor with shape :math:`()`.
            inputs (tuple(Tensor)): Tuple of input tensors with shape :math:`(N, \ldots)`.

        Outputs:
-            - **loss** (Tensor) - Tensor with shape :math:`()`.
+            - **loss** (Tensor) - Network loss, tensor with shape :math:`()`.
        """
        loss = F.depend(loss, self.hyper_map(F.partial(gradient_accumulation_op, self.max_accumulation_step),
                                             self.grad_accumulation, grads))
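The `hyper_map` call above applies `gradient_accumulation_op` across the gradient tuple. As a framework-agnostic sketch of the idea in plain Python (hypothetical helper; whether MindSpore averages or only sums before stepping is not shown in this hunk and is an assumption here):

```python
import numpy as np

def gradient_accumulation_step(step, grad_buffer, grads, apply_update, max_accumulation_step):
    """Sum per-batch gradients into a buffer; step the optimizer every N-th call."""
    for buf, g in zip(grad_buffer, grads):
        buf += g                                    # accumulate this batch's gradient
    if (step + 1) % max_accumulation_step == 0:     # accumulation window complete
        apply_update([b / max_accumulation_step for b in grad_buffer])
        for buf in grad_buffer:
            buf[...] = 0.0                          # clear the buffer for the next window

buffer = [np.zeros(3)]
for step in range(4):
    gradient_accumulation_step(step, buffer, [np.ones(3)], print, max_accumulation_step=4)
# prints once, at step 3, with the averaged gradient [1. 1. 1.]
```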
@@ -296,7 +297,7 @@ class BoostTrainOneStepCell(TrainOneStepCell):
            grads (tuple(Tensor)): Tuple of gradient tensors.

        Outputs:
-            - **loss** (Tensor) - Tensor with shape :math:`()`.
+            - **loss** (Tensor) - Network loss, tensor with shape :math:`()`.
        """
        loss = F.depend(loss, self.optimizer(grads))
        rank_weights = self.weights[self.start[self.server_rank]: self.end[self.server_rank]]
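The `rank_weights` slice above selects the subset of parameters this process is responsible for, using precomputed per-rank index tables. A hypothetical illustration of that start/end indexing pattern (the actual partitioning scheme in MindSpore may differ):

```python
# Stand-ins for self.weights, self.start, self.end, self.server_rank.
weights = [f"param_{i}" for i in range(10)]
rank_size = 4
chunk = (len(weights) + rank_size - 1) // rank_size          # ceil division
start = [i * chunk for i in range(rank_size)]                # first index owned by each rank
end = [min((i + 1) * chunk, len(weights)) for i in range(rank_size)]
server_rank = 1
rank_weights = weights[start[server_rank]:end[server_rank]]  # ['param_3', 'param_4', 'param_5']
```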