fix KLDivLoss docs

fanjibin 2022-10-26 21:21:18 +08:00
parent dd25d09add
commit 8832bb156f
6 changed files with 11 additions and 9 deletions


@@ -26,7 +26,7 @@ mindspore.nn.KLDivLoss

     .. note::
         - Currently, the Ascend platform does not support the float64 data type.
-        - The output is consistent with the mathematical definition of KL divergence only when `reduction` is set to "batchmean".
+        - The output is consistent with the mathematical definition of Kullback-Leibler divergence only when `reduction` is set to "batchmean".

     Parameters:
         - **reduction** (str) - Specifies how the output is computed. Default: "mean".


@@ -26,7 +26,7 @@ mindspore.ops.KLDivLoss

     .. note::
         - Currently, the Ascend platform does not support the float64 data type.
-        - The output is consistent with the mathematical definition of KL divergence only when `reduction` is set to "batchmean".
+        - The output is consistent with the mathematical definition of Kullback-Leibler divergence only when `reduction` is set to "batchmean".

     Parameters:
         - **reduction** (str) - Specifies how the output is computed. Default: "mean".


@@ -26,7 +26,7 @@ mindspore.ops.kl_div

     .. note::
         - Currently, the Ascend platform does not support the float64 data type.
-        - The output is consistent with the mathematical definition of KL divergence only when `reduction` is set to "batchmean".
+        - The output is consistent with the mathematical definition of Kullback-Leibler divergence only when `reduction` is set to "batchmean".

     Parameters:
         - **logits** (Tensor) - Supported data types are float16, float32, and float64.


@@ -2196,8 +2196,9 @@ class KLDivLoss(LossBase):
     :math:`\ell(x, target)` represents `output`.

     Note:
-        Currently it does not support float64 input on `Ascend`.
-        It behaves the same as the mathematical definition only when `reduction` is set to `batchmean`.
+        - Currently it does not support float64 input on `Ascend`.
+        - The output aligns with the mathematical definition of Kullback-Leibler divergence
+          only when `reduction` is set to 'batchmean'.

     Args:
         reduction (str): Specifies the reduction to be applied to the output.

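For reference, the definition these notes appeal to (standard background, not part of the diff): for per-sample distributions P and Q over classes j, and a batch of N samples,

\[
D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{j} p_j \log \frac{p_j}{q_j},
\qquad
\mathrm{output}_{\text{batchmean}} = \frac{1}{N} \sum_{i=1}^{N} \sum_{j} p_{ij}\,\bigl(\log p_{ij} - x_{ij}\bigr),
\]

where x = `logits` holds log-probabilities, so the inner sum is exactly D_KL(P_i || Q_i). With reduction='mean' the same sum is divided by the total element count rather than by N, which is why only 'batchmean' matches the definition; see the numeric sketch at the end of this commit.
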

@@ -895,7 +895,8 @@ def kl_div(logits, labels, reduction='mean'):

     Note:
         - Currently it does not support float64 input on `Ascend`.
-        - It behaves the same as the mathematical definition only when `reduction` is set to `batchmean`.
+        - The output aligns with the mathematical definition of Kullback-Leibler divergence
+          only when `reduction` is set to 'batchmean'.

     Args:
         logits (Tensor): The input Tensor. The data type must be float16, float32 or float64.


@@ -5545,9 +5545,9 @@ class KLDivLoss(Primitive):
     :math:`\ell(x, target)` represents `output`.

     Note:
-        On Ascend, float64 dtype is not currently supported.
-        The output aligns with the mathematical definition of KL divergence
-        only when `reduction` is set to 'batchmean'.
+        - On Ascend, float64 dtype is not currently supported.
+        - The output aligns with the mathematical definition of Kullback-Leibler divergence
+          only when `reduction` is set to 'batchmean'.

     Args:
         reduction (str): Specifies the reduction to be applied to the output.
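
A minimal sketch checking the 'batchmean' claim numerically (assumes MindSpore is installed and that `logits` holds log-probabilities, as the docstrings above imply; the sample values are made up):

    import numpy as np
    from mindspore import Tensor, ops

    # Two samples, two categories each; rows sum to 1.
    p = np.array([[0.4, 0.6], [0.3, 0.7]], dtype=np.float32)  # target distributions
    q = np.array([[0.5, 0.5], [0.2, 0.8]], dtype=np.float32)  # predicted distributions

    logits = Tensor(np.log(q))  # log-probabilities of the predictions
    labels = Tensor(p)

    # 'batchmean': sum(labels * (log(labels) - logits)) / batch_size
    out = ops.kl_div(logits, labels, reduction='batchmean')

    # Direct evaluation of the definition, averaged over the batch
    ref = np.mean(np.sum(p * np.log(p / q), axis=1))

    print(out.asnumpy(), ref)  # should agree up to float32 rounding

With reduction='mean' the same inputs are divided by the total element count (4 here) instead of the batch size (2), giving half the 'batchmean' value.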