forked from mindspore-Ecosystem/mindspore
fix KLDivLoss docs
This commit is contained in:
parent dd25d09add
commit 8832bb156f
@@ -26,7 +26,7 @@ mindspore.nn.KLDivLoss

 .. note::
     - The Ascend platform currently does not support the float64 data type.
-    - The output is consistent with the mathematical definition of KL divergence only when `reduction` is set to "batchmean".
+    - The output is consistent with the mathematical definition of Kullback-Leibler divergence only when `reduction` is set to "batchmean".

 Parameters:
     - **reduction** (str) - Specifies the reduction to be applied to the output. Default: "mean".
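The "batchmean" caveat that each of these notes adds is easiest to see in the reduction arithmetic itself. Below is a minimal NumPy sketch of the pointwise loss and the reduction modes; it is an illustration only, not MindSpore's implementation, and the helper name `kl_div_loss` is hypothetical:

```python
import numpy as np

def kl_div_loss(x, target, reduction="mean"):
    """Pointwise loss from the docstrings: L(x, target) = target * (log(target) - x),
    where x holds log-probabilities and target holds probabilities."""
    # Zero entries in `target` would need xlogy-style special-casing; omitted here.
    pointwise = target * (np.log(target) - x)
    if reduction == "none":
        return pointwise
    if reduction == "sum":
        return pointwise.sum()
    if reduction == "mean":
        # Divides by the total element count N * C.
        return pointwise.mean()
    if reduction == "batchmean":
        # Divides by the batch size N only, so each sample keeps its full
        # sum over classes -- what the KL divergence definition expects.
        return pointwise.sum() / x.shape[0]
    raise ValueError(f"unsupported reduction: {reduction}")
```

Because "mean" also divides by the class dimension, only "batchmean" reproduces the textbook KL divergence averaged over the batch, which is exactly what the corrected notes state.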
@@ -26,7 +26,7 @@ mindspore.ops.KLDivLoss

 .. note::
     - The Ascend platform currently does not support the float64 data type.
-    - The output is consistent with the mathematical definition of KL divergence only when `reduction` is set to "batchmean".
+    - The output is consistent with the mathematical definition of Kullback-Leibler divergence only when `reduction` is set to "batchmean".

 Parameters:
     - **reduction** (str) - Specifies the reduction to be applied to the output. Default: "mean".
@@ -26,7 +26,7 @@ mindspore.ops.kl_div

 .. note::
     - The Ascend platform currently does not support the float64 data type.
-    - The output is consistent with the mathematical definition of KL divergence only when `reduction` is set to "batchmean".
+    - The output is consistent with the mathematical definition of Kullback-Leibler divergence only when `reduction` is set to "batchmean".

 Parameters:
     - **logits** (Tensor) - Supported data types are float16, float32 and float64.
@@ -2196,8 +2196,9 @@ class KLDivLoss(LossBase):
     :math:`\ell(x, target)` represents `output`.

     Note:
-        Currently it does not support float64 input on `Ascend`.
-        It behaves the same as the mathematical definition only when `reduction` is set to `batchmean`.
+        - Currently it does not support float64 input on `Ascend`.
+        - The output aligns with the mathematical definition of Kullback-Leibler divergence
+          only when `reduction` is set to 'batchmean'.

     Args:
         reduction (str): Specifies the reduction to be applied to the output.
@@ -895,7 +895,8 @@ def kl_div(logits, labels, reduction='mean'):

     Note:
         - Currently it does not support float64 input on `Ascend`.
-        - It behaves the same as the mathematical definition only when `reduction` is set to `batchmean`.
+        - The output aligns with the mathematical definition of Kullback-Leibler divergence
+          only when `reduction` is set to 'batchmean'.

     Args:
         logits (Tensor): The input Tensor. The data type must be float16, float32 or float64.
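For the functional form touched above, here is a short usage sketch, assuming a MindSpore build that exports `mindspore.ops.kl_div` with the signature shown in the hunk; the input values are illustrative:

```python
import numpy as np
import mindspore as ms
from mindspore import ops

# `logits` holds log-probabilities, `labels` holds target probabilities.
logits = ms.Tensor(np.log(np.array([[0.7, 0.2, 0.1]], dtype=np.float32)))
labels = ms.Tensor(np.array([[0.6, 0.3, 0.1]], dtype=np.float32))

# 'mean' averages over all N * C elements; per the corrected note, only
# 'batchmean' matches the mathematical definition of KL divergence.
print(ops.kl_div(logits, labels, reduction='mean'))
```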
@@ -5545,9 +5545,9 @@ class KLDivLoss(Primitive):
     :math:`\ell(x, target)` represents `output`.

     Note:
-        On Ascend, float64 dtype is not currently supported.
-        The output aligns with the mathematical definition of KL divergence
-        only when `reduction` is set to 'batchmean'.
+        - On Ascend, float64 dtype is not currently supported.
+        - The output aligns with the mathematical definition of Kullback-Leibler divergence
+          only when `reduction` is set to 'batchmean'.

     Args:
         reduction (str): Specifies the reduction to be applied to the output.
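The class-based API documented in the `nn.KLDivLoss` hunks can be exercised the same way; a minimal sketch under the same assumptions, kept in float32 to respect the Ascend constraint noted above:

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn

logits = ms.Tensor(np.log(np.array([[0.7, 0.2, 0.1],
                                    [0.3, 0.3, 0.4]], dtype=np.float32)))
labels = ms.Tensor(np.array([[0.6, 0.3, 0.1],
                             [0.2, 0.5, 0.3]], dtype=np.float32))

# 'batchmean' divides the summed loss by the batch size (2 here), the mode
# the updated notes single out as matching the mathematical definition.
loss_fn = nn.KLDivLoss(reduction='batchmean')
print(loss_fn(logits, labels))
```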