@@ -0,0 +1,50 @@
Class mindspore.nn.InverseDecayLR(learning_rate, decay_rate, decay_steps, is_stair=False)

Calculates the learning rate based on the inverse-time decay function.

For the current step, the formula for computing decayed_learning_rate[current_step] is:

.. math::
    decayed\_learning\_rate[current\_step] = learning\_rate / (1 + decay\_rate * p)

where

.. math::
    p = \frac{current\_step}{decay\_steps}

If `is_stair` is True, the formula is:

.. math::
    p = floor(\frac{current\_step}{decay\_steps})

Args:
    learning_rate (float): The initial value of the learning rate.
    decay_rate (float): The decay rate.
    decay_steps (int): A value used to compute the decayed learning rate.
    is_stair (bool): If True, the learning rate is decayed once every `decay_steps` steps. Default: False.

Inputs:
    - **global_step** (Tensor): The current step number.

Outputs:
    Tensor. The learning rate value for the current step, with shape :math:`()`.

Raises:
    TypeError: If `learning_rate` or `decay_rate` is not a float.
    TypeError: If `decay_steps` is not an int or `is_stair` is not a bool.
    ValueError: If `decay_steps` is less than 1.
    ValueError: If `learning_rate` or `decay_rate` is less than or equal to 0.

Supported Platforms:
    ``Ascend`` ``GPU`` ``CPU``

Examples:
    >>> learning_rate = 0.1
    >>> decay_rate = 0.9
    >>> decay_steps = 4
    >>> global_step = Tensor(2, mstype.int32)
    >>> inverse_decay_lr = nn.InverseDecayLR(learning_rate, decay_rate, decay_steps, True)
    >>> result = inverse_decay_lr(global_step)
    >>> print(result)
    0.1
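
A rough cross-check of the example above, using only the formula in plain Python (an illustrative sketch, not part of the MindSpore API; `p` is just a local variable):

    >>> p = 2 // 4           # is_stair=True, so p = floor(current_step / decay_steps) = floor(2 / 4)
    >>> 0.1 / (1 + 0.9 * p)  # learning_rate / (1 + decay_rate * p)
    0.1
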
@@ -0,0 +1,50 @@
Class mindspore.nn.NaturalExpDecayLR(learning_rate, decay_rate, decay_steps, is_stair=False)

Calculates the learning rate based on the natural exponential decay function.

For the current step, the formula for computing decayed_learning_rate[current_step] is:

.. math::
    decayed\_learning\_rate[current\_step] = learning\_rate * e^{-decay\_rate * p}

where

.. math::
    p = \frac{current\_step}{decay\_steps}

If `is_stair` is True, the formula is:

.. math::
    p = floor(\frac{current\_step}{decay\_steps})

Args:
    learning_rate (float): The initial value of the learning rate.
    decay_rate (float): The decay rate.
    decay_steps (int): A value used to compute the decayed learning rate.
    is_stair (bool): If True, the learning rate is decayed once every `decay_steps` steps. Default: False.

Inputs:
    - **global_step** (Tensor): The current step number.

Outputs:
    Tensor. The learning rate value for the current step, with shape :math:`()`.

Raises:
    TypeError: If `learning_rate` or `decay_rate` is not a float.
    TypeError: If `decay_steps` is not an int or `is_stair` is not a bool.
    ValueError: If `decay_steps` is less than 1.
    ValueError: If `learning_rate` or `decay_rate` is less than or equal to 0.

Supported Platforms:
    ``Ascend`` ``GPU`` ``CPU``

Examples:
    >>> learning_rate = 0.1
    >>> decay_rate = 0.9
    >>> decay_steps = 4
    >>> global_step = Tensor(2, mstype.int32)
    >>> natural_exp_decay_lr = nn.NaturalExpDecayLR(learning_rate, decay_rate, decay_steps, True)
    >>> result = natural_exp_decay_lr(global_step)
    >>> print(result)
    0.1
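
The example output can be reproduced from the formula with plain Python (an illustrative sketch, independent of MindSpore; `p` is just a local variable):

    >>> import math
    >>> p = 2 // 4                # is_stair=True, so p = floor(current_step / decay_steps) = floor(2 / 4)
    >>> 0.1 * math.exp(-0.9 * p)  # learning_rate * e^(-decay_rate * p)
    0.1
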
@@ -0,0 +1,53 @@
Class mindspore.nn.PolynomialDecayLR(learning_rate, end_learning_rate, decay_steps, power, update_decay_steps=False)

Calculates the learning rate based on the polynomial decay function.

For the current step, the formula for computing decayed_learning_rate[current_step] is:

.. math::
    decayed\_learning\_rate[current\_step] = (learning\_rate - end\_learning\_rate) *
    (1 - tmp\_step / tmp\_decay\_steps)^{power} + end\_learning\_rate

where

.. math::
    tmp\_step = min(current\_step, decay\_steps)

If `update_decay_steps` is True, the value of `tmp_decay_steps` is updated every `decay_steps` steps. The formula is:

.. math::
    tmp\_decay\_steps = decay\_steps * ceil(current\_step / decay\_steps)

Args:
    learning_rate (float): The initial value of the learning rate.
    end_learning_rate (float): The end value of the learning rate.
    decay_steps (int): A value used to compute the decayed learning rate.
    power (float): A value used to compute the decayed learning rate. This parameter must be greater than 0.
    update_decay_steps (bool): If True, the learning rate is decayed once every `decay_steps` steps. Default: False.

Inputs:
    - **global_step** (Tensor): The current step number.

Outputs:
    Tensor. The learning rate value for the current step, with shape :math:`()`.

Raises:
    TypeError: If `learning_rate`, `end_learning_rate` or `power` is not a float.
    TypeError: If `decay_steps` is not an int or `update_decay_steps` is not a bool.
    ValueError: If `end_learning_rate` is less than 0 or `decay_steps` is less than 1.
    ValueError: If `learning_rate` or `power` is less than or equal to 0.

Supported Platforms:
    ``Ascend`` ``GPU``

Examples:
    >>> learning_rate = 0.1
    >>> end_learning_rate = 0.01
    >>> decay_steps = 4
    >>> power = 0.5
    >>> global_step = Tensor(2, mstype.int32)
    >>> polynomial_decay_lr = nn.PolynomialDecayLR(learning_rate, end_learning_rate, decay_steps, power)
    >>> result = polynomial_decay_lr(global_step)
    >>> print(result)
    0.07363961
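
To see where 0.07363961 comes from, the formula can be evaluated directly in plain Python (an illustrative sketch only; `tmp_step` is just a local variable, and `tmp_decay_steps` stays at 4 because `update_decay_steps` is left at False):

    >>> tmp_step = min(2, 4)  # tmp_step = min(current_step, decay_steps)
    >>> round((0.1 - 0.01) * (1 - tmp_step / 4) ** 0.5 + 0.01, 8)
    0.07363961
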
@@ -0,0 +1,32 @@
mindspore.nn.cosine_decay_lr(min_lr, max_lr, total_step, step_per_epoch, decay_epoch)

Calculates the learning rate based on the cosine decay function.

For the i-th step, the formula for computing decayed_learning_rate[i] is:

.. math::
    decayed\_learning\_rate[i] = min\_lr + 0.5 * (max\_lr - min\_lr) *
    (1 + cos(\frac{current\_epoch}{decay\_epoch}\pi))

where :math:`current\_epoch=floor(\frac{i}{step\_per\_epoch})`.

Args:
    min_lr (float): The minimum value of the learning rate.
    max_lr (float): The maximum value of the learning rate.
    total_step (int): The total number of steps.
    step_per_epoch (int): The number of steps per epoch.
    decay_epoch (int): A value used to compute the decayed learning rate.

Returns:
    list[float]. The size of the list is `total_step`.

Examples:
    >>> min_lr = 0.01
    >>> max_lr = 0.1
    >>> total_step = 6
    >>> step_per_epoch = 2
    >>> decay_epoch = 2
    >>> output = cosine_decay_lr(min_lr, max_lr, total_step, step_per_epoch, decay_epoch)
    >>> print(output)
    [0.1, 0.1, 0.05500000000000001, 0.05500000000000001, 0.01, 0.01]
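
A rounded cross-check of the output above using only the formula (plain Python, not the MindSpore implementation; current_epoch = i // step_per_epoch):

    >>> import math
    >>> [round(0.01 + 0.5 * (0.1 - 0.01) * (1 + math.cos(math.pi * (i // 2) / 2)), 4) for i in range(6)]
    [0.1, 0.1, 0.055, 0.055, 0.01, 0.01]
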
@@ -0,0 +1,32 @@
mindspore.nn.exponential_decay_lr(learning_rate, decay_rate, total_step, step_per_epoch, decay_epoch, is_stair=False)

Calculates the learning rate based on the exponential decay function.

For the i-th step, the formula for computing decayed_learning_rate[i] is:

.. math::
    decayed\_learning\_rate[i] = learning\_rate * decay\_rate^{\frac{current\_epoch}{decay\_epoch}}

where :math:`current\_epoch=floor(\frac{i}{step\_per\_epoch})`.

Args:
    learning_rate (float): The initial value of the learning rate.
    decay_rate (float): The decay rate.
    total_step (int): The total number of steps.
    step_per_epoch (int): The number of steps per epoch.
    decay_epoch (int): A value used to compute the decayed learning rate.
    is_stair (bool): If True, the learning rate is decayed once every `decay_epoch` epochs. Default: False.

Returns:
    list[float]. The size of the list is `total_step`.

Examples:
    >>> learning_rate = 0.1
    >>> decay_rate = 0.9
    >>> total_step = 6
    >>> step_per_epoch = 2
    >>> decay_epoch = 1
    >>> output = exponential_decay_lr(learning_rate, decay_rate, total_step, step_per_epoch, decay_epoch)
    >>> print(output)
    [0.1, 0.1, 0.09000000000000001, 0.09000000000000001, 0.08100000000000002, 0.08100000000000002]
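
The same values, rounded, follow from the formula in plain Python with current_epoch = i // step_per_epoch (a sketch for illustration, not the library code):

    >>> [round(0.1 * 0.9 ** ((i // 2) / 1), 4) for i in range(6)]
    [0.1, 0.1, 0.09, 0.09, 0.081, 0.081]
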
@@ -0,0 +1,32 @@
mindspore.nn.inverse_decay_lr(learning_rate, decay_rate, total_step, step_per_epoch, decay_epoch, is_stair=False)

Calculates the learning rate based on the inverse-time decay function.

For the i-th step, the formula for computing decayed_learning_rate[i] is:

.. math::
    decayed\_learning\_rate[i] = learning\_rate / (1 + decay\_rate * current\_epoch / decay\_epoch)

where :math:`current\_epoch=floor(\frac{i}{step\_per\_epoch})`.

Args:
    learning_rate (float): The initial value of the learning rate.
    decay_rate (float): The decay rate.
    total_step (int): The total number of steps.
    step_per_epoch (int): The number of steps per epoch.
    decay_epoch (int): A value used to compute the decayed learning rate.
    is_stair (bool): If True, the learning rate is decayed once every `decay_epoch` epochs. Default: False.

Returns:
    list[float]. The size of the list is `total_step`.

Examples:
    >>> learning_rate = 0.1
    >>> decay_rate = 0.5
    >>> total_step = 6
    >>> step_per_epoch = 1
    >>> decay_epoch = 1
    >>> output = inverse_decay_lr(learning_rate, decay_rate, total_step, step_per_epoch, decay_epoch, True)
    >>> print(output)
    [0.1, 0.06666666666666667, 0.05, 0.04, 0.03333333333333333, 0.028571428571428574]
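
A rounded plain-Python check of the list above; with step_per_epoch=1 and decay_epoch=1, current_epoch is simply the step index i (illustrative only):

    >>> [round(0.1 / (1 + 0.5 * i), 4) for i in range(6)]
    [0.1, 0.0667, 0.05, 0.04, 0.0333, 0.0286]
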
@@ -0,0 +1,32 @@
mindspore.nn.natural_exp_decay_lr(learning_rate, decay_rate, total_step, step_per_epoch, decay_epoch, is_stair=False)

Calculates the learning rate based on the natural exponential decay function.

For the i-th step, the formula for computing decayed_learning_rate[i] is:

.. math::
    decayed\_learning\_rate[i] = learning\_rate * e^{-decay\_rate * current\_epoch}

where :math:`current\_epoch=floor(\frac{i}{step\_per\_epoch})`.

Args:
    learning_rate (float): The initial value of the learning rate.
    decay_rate (float): The decay rate.
    total_step (int): The total number of steps.
    step_per_epoch (int): The number of steps per epoch.
    decay_epoch (int): A value used to compute the decayed learning rate.
    is_stair (bool): If True, the learning rate is decayed once every `decay_epoch` epochs. Default: False.

Returns:
    list[float]. The size of the list is `total_step`.

Examples:
    >>> learning_rate = 0.1
    >>> decay_rate = 0.9
    >>> total_step = 6
    >>> step_per_epoch = 2
    >>> decay_epoch = 2
    >>> output = natural_exp_decay_lr(learning_rate, decay_rate, total_step, step_per_epoch, decay_epoch, True)
    >>> print(output)
    [0.1, 0.1, 0.1, 0.1, 0.016529888822158657, 0.016529888822158657]
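
The output above is consistent with evaluating the formula in plain Python, assuming that with `is_stair=True` the exponent uses current_epoch snapped down to a multiple of `decay_epoch` (this reading is inferred from the example values, not stated explicitly here):

    >>> import math
    >>> [round(0.1 * math.exp(-0.9 * 2 * ((i // 2) // 2)), 6) for i in range(6)]
    [0.1, 0.1, 0.1, 0.1, 0.01653, 0.01653]
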
@@ -0,0 +1,23 @@
mindspore.nn.piecewise_constant_lr(milestone, learning_rates)

Gets the piecewise constant learning rate.

Calculates the learning rate from the given `milestone` and `learning_rates`. Let the value of `milestone` be :math:`(M_1, M_2, ..., M_t, ..., M_N)` and the value of `learning_rates` be :math:`(x_1, x_2, ..., x_t, ..., x_N)`. N is the length of `milestone`.
Let `y` be the output learning rate; then for the i-th step, the formula for computing y[i] is:

.. math::
    y[i] = x_t,\ for\ i \in [M_{t-1}, M_t)

Args:
    milestone (Union[list[int], tuple[int]]): A list of milestone steps. This list must be monotonically increasing, and every element in it must be greater than 0.
    learning_rates (Union[list[float], tuple[float]]): A list of learning rates.

Returns:
    list[float]. The size of the list is :math:`M_N`.

Examples:
    >>> milestone = [2, 5, 10]
    >>> learning_rates = [0.1, 0.05, 0.01]
    >>> output = piecewise_constant_lr(milestone, learning_rates)
    >>> print(output)
    [0.1, 0.1, 0.05, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01, 0.01]
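
The output above can be reproduced directly from the definition: each learning rate x_t is repeated up to its milestone M_t (a plain-Python sketch for illustration):

    >>> milestone = [2, 5, 10]
    >>> learning_rates = [0.1, 0.05, 0.01]
    >>> [lr for prev, m, lr in zip([0] + milestone[:-1], milestone, learning_rates) for _ in range(m - prev)]
    [0.1, 0.1, 0.05, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01, 0.01]
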
@@ -0,0 +1,49 @@
mindspore.nn.polynomial_decay_lr(learning_rate, end_learning_rate, total_step, step_per_epoch, decay_epoch, power, update_decay_epoch=False)

Calculates the learning rate based on the polynomial decay function.

For the i-th step, the formula for computing decayed_learning_rate[i] is:

.. math::
    decayed\_learning\_rate[i] = (learning\_rate - end\_learning\_rate) *
    (1 - tmp\_epoch / tmp\_decay\_epoch)^{power} + end\_learning\_rate

where

.. math::
    tmp\_epoch = min(current\_epoch, decay\_epoch)

.. math::
    current\_epoch=floor(\frac{i}{step\_per\_epoch})

.. math::
    tmp\_decay\_epoch = decay\_epoch

If `update_decay_epoch` is True, the value of `tmp_decay_epoch` is updated every epoch. The formula is:

.. math::
    tmp\_decay\_epoch = decay\_epoch * ceil(current\_epoch / decay\_epoch)

Args:
    learning_rate (float): The initial value of the learning rate.
    end_learning_rate (float): The end value of the learning rate.
    total_step (int): The total number of steps.
    step_per_epoch (int): The number of steps per epoch.
    decay_epoch (int): A value used to compute the decayed learning rate.
    power (float): A value used to compute the decayed learning rate. This parameter must be greater than 0.
    update_decay_epoch (bool): If True, `decay_epoch` is updated. Default: False.

Returns:
    list[float]. The size of the list is `total_step`.

Examples:
    >>> learning_rate = 0.1
    >>> end_learning_rate = 0.01
    >>> total_step = 6
    >>> step_per_epoch = 2
    >>> decay_epoch = 2
    >>> power = 0.5
    >>> r = polynomial_decay_lr(learning_rate, end_learning_rate, total_step, step_per_epoch, decay_epoch, power)
    >>> print(r)
    [0.1, 0.1, 0.07363961030678928, 0.07363961030678928, 0.01, 0.01]
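
A rounded plain-Python check of the list above, with current_epoch = i // step_per_epoch and tmp_epoch = min(current_epoch, decay_epoch) (an illustrative sketch, not the library code):

    >>> [round((0.1 - 0.01) * (1 - min(i // 2, 2) / 2) ** 0.5 + 0.01, 8) for i in range(6)]
    [0.1, 0.1, 0.07363961, 0.07363961, 0.01, 0.01]
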
@@ -0,0 +1,29 @@
mindspore.nn.warmup_lr(learning_rate, total_step, step_per_epoch, warmup_epoch)

Gets the warmup learning rate.

For the i-th step, the formula for computing warmup_learning_rate[i] is:

.. math::
    warmup\_learning\_rate[i] = learning\_rate * tmp\_epoch / warmup\_epoch

where :math:`tmp\_epoch=min(current\_epoch, warmup\_epoch),\ current\_epoch=floor(\frac{i}{step\_per\_epoch})`.

Args:
    learning_rate (float): The initial value of the learning rate.
    total_step (int): The total number of steps.
    step_per_epoch (int): The number of steps per epoch.
    warmup_epoch (int): The number of epochs used to warm up the learning rate.

Returns:
    list[float]. The size of the list is `total_step`.

Examples:
    >>> learning_rate = 0.1
    >>> total_step = 6
    >>> step_per_epoch = 2
    >>> warmup_epoch = 2
    >>> output = warmup_lr(learning_rate, total_step, step_per_epoch, warmup_epoch)
    >>> print(output)
    [0.0, 0.0, 0.05, 0.05, 0.1, 0.1]
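
The output above follows from the formula in plain Python, with current_epoch = i // step_per_epoch and tmp_epoch = min(current_epoch, warmup_epoch) (a sketch, not the library code):

    >>> [round(0.1 * min(i // 2, 2) / 2, 4) for i in range(6)]
    [0.0, 0.0, 0.05, 0.05, 0.1, 0.1]
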
@@ -22,16 +22,17 @@ def piecewise_constant_lr(milestone, learning_rates):
     r"""
     Get piecewise constant learning rate.
 
-    Calculate learning rate by given `milestone` and `learning_rates`. Let the value of `milestone` be
-    :math:`(M_1, M_2, ..., M_N)` and the value of `learning_rates` be :math:`(x_1, x_2, ..., x_N)`. N is the length of
-    `milestone`. Let the output learning rate be `y`.
+    Calculate learning rate by the given `milestone` and `learning_rates`. Let the value of `milestone` be
+    :math:`(M_1, M_2, ..., M_t, ..., M_N)` and the value of `learning_rates` be :math:`(x_1, x_2, ..., x_t, ..., x_N)`.
+    N is the length of `milestone`. Let the output learning rate be `y`, then for the i-th step, the formula of
+    computing decayed_learning_rate[i] is:
 
     .. math::
         y[i] = x_t,\ for\ i \in [M_{t-1}, M_t)
 
     Args:
         milestone (Union[list[int], tuple[int]]): A list of milestone. This list is a monotone increasing list.
-            Every element is a milestone step, and must be greater than 0.
+            Every element in the list must be greater than 0.
         learning_rates (Union[list[float], tuple[float]]): A list of learning rates.
 
     Returns:
@@ -215,7 +216,7 @@ def cosine_decay_lr(min_lr, max_lr, total_step, step_per_epoch, decay_epoch):
     For the i-th step, the formula of computing decayed_learning_rate[i] is:
 
     .. math::
-        decayed\_learning\_rate[i] = min\_learning\_rate + 0.5 * (max\_learning\_rate - min\_learning\_rate) *
+        decayed\_learning\_rate[i] = min\_lr + 0.5 * (max\_lr - min\_lr) *
         (1 + cos(\frac{current\_epoch}{decay\_epoch}\pi))
 
     Where :math:`current\_epoch=floor(\frac{i}{step\_per\_epoch})`.
@@ -344,7 +345,7 @@ def warmup_lr(learning_rate, total_step, step_per_epoch, warmup_epoch):
     For the i-th step, the formula of computing warmup_learning_rate[i] is:
 
     .. math::
-        warmup\_learning\_rate[i] = learning\_rate * tmp\_epoch / tmp\_warmup\_epoch
+        warmup\_learning\_rate[i] = learning\_rate * tmp\_epoch / warmup\_epoch
 
     Where :math:`tmp\_epoch=min(current\_epoch, warmup\_epoch),\ current\_epoch=floor(\frac{i}{step\_per\_epoch})`
@@ -125,10 +125,10 @@ class NaturalExpDecayLR(LearningRateSchedule):
     r"""
     Calculates learning rate base on natural exponential decay function.
 
-    For the i-th step, the formula of computing decayed_learning_rate[i] is:
+    For current step, the formula of computing decayed_learning_rate[current_step] is:
 
     .. math::
-        decayed\_learning\_rate[i] = learning\_rate * e^{-decay\_rate * p}
+        decayed\_learning\_rate[current_step] = learning\_rate * e^{-decay\_rate * p}
 
     Where :
@@ -193,10 +193,10 @@ class InverseDecayLR(LearningRateSchedule):
     r"""
     Calculates learning rate base on inverse-time decay function.
 
-    For the i-th step, the formula of computing decayed_learning_rate[i] is:
+    For current step, the formula of computing decayed_learning_rate[current_step] is:
 
     .. math::
-        decayed\_learning\_rate[i] = learning\_rate / (1 + decay\_rate * p)
+        decayed\_learning\_rate[current\_step] = learning\_rate / (1 + decay\_rate * p)
 
     Where :
@@ -326,10 +326,10 @@ class PolynomialDecayLR(LearningRateSchedule):
     r"""
     Calculates learning rate base on polynomial decay function.
 
-    For the i-th step, the formula of computing decayed_learning_rate[i] is:
+    For current step, the formula of computing decayed_learning_rate[current_step] is:
 
     .. math::
-        decayed\_learning\_rate[i] = (learning\_rate - end\_learning\_rate) *
+        decayed\_learning\_rate[current\_step] = (learning\_rate - end\_learning\_rate) *
         (1 - tmp\_step / tmp\_decay\_steps)^{power} + end\_learning\_rate
 
     Where :