!35146 update doc of nn.HuberLoss

Merge pull request !35146 from fujianzhao/code_docs
This commit is contained in:
i-robot 2022-05-30 08:15:22 +00:00 committed by Gitee
commit 940be5d9a9
GPG Key ID: 173E9B9CA92EEF8F
2 changed files with 20 additions and 16 deletions


@@ -1,7 +1,7 @@
mindspore.nn.HuberLoss
=============================
-.. py:class:: mindspore.nn.HuberLoss(reduction='mean', delta=1.0)
+.. py:class:: mindspore.nn.HuberLoss(reduction="mean", delta=1.0)

HuberLoss computes the error between the predicted value and the target value. It combines the advantages of both L1Loss and MSELoss.
@@ -23,13 +23,13 @@ mindspore.nn.HuberLoss
.. math::
\ell(x, y) =
\begin{cases}
-\operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\
-\operatorname{sum}(L), & \text{if reduction} = \text{'sum'.}
+\operatorname{mean}(L), & \text{if reduction} = \text{"mean";}\\
+\operatorname{sum}(L), & \text{if reduction} = \text{"sum".}
\end{cases}
**Parameters:**

-- **reduction** (str) - Type of reduction to apply to the loss. Optional values are "mean", "sum", or "none". Default: "mean". If `reduction` is 'mean' or 'sum', a scalar Tensor is returned; if `reduction` is 'none', the shape of the output Tensor is the broadcasted shape.
+- **reduction** (str) - Type of reduction to apply to the loss. Optional values are "mean", "sum", or "none". Default: "mean". If `reduction` is "mean" or "sum", a scalar Tensor is returned; if `reduction` is "none", the shape of the output Tensor is the broadcasted shape.
- **delta** (Union[int, float]) - The threshold at which the loss changes between the two types. The value must be positive. Default: 1.0.
**Inputs:**
@@ -39,11 +39,12 @@ mindspore.nn.HuberLoss
**Outputs:**

-Tensor or Scalar. If `reduction` is 'none', a Tensor with the same shape and data type as the input 'logits'. Otherwise, the output is a Scalar.
+Tensor or Scalar. If `reduction` is "none", returns a Tensor with the same shape and dtype as `logits`. Otherwise, a Scalar is returned.
**Raises:**

- **TypeError** - The data type of `logits` or `labels` is neither float16 nor float32.
- **TypeError** - The data types of `logits` and `labels` are different.
- **TypeError** - `delta` is neither float nor int.
- **ValueError** - The value of `delta` is less than or equal to 0.
- **ValueError** - `reduction` is not "mean", "sum", or "none".
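The piecewise definition documented above can be checked numerically. Below is a minimal NumPy re-implementation sketch for illustration only, not the MindSpore kernel; the function name `huber` is our own:

```python
import numpy as np

def huber(x, y, delta=1.0):
    # Quadratic inside |x - y| < delta, linear outside; the two
    # pieces meet continuously at |x - y| == delta.
    d = np.abs(np.asarray(x, dtype=np.float32) - np.asarray(y, dtype=np.float32))
    return np.where(d < delta, 0.5 * d ** 2, delta * (d - 0.5 * delta))

diffs = np.array([0.5, 1.0, 3.0], dtype=np.float32)
# quadratic branch: 0.125; boundary: 0.5; linear branch: 2.5
print(huber(diffs, np.zeros(3)))
# widening delta enlarges the quadratic region
print(huber(diffs, np.zeros(3), delta=2.0))
```

Raising `delta` trades robustness for sensitivity: a larger threshold keeps more residuals in the MSE-like quadratic branch, while a smaller one pushes them into the outlier-tolerant L1-like linear branch.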


@@ -1504,13 +1504,13 @@ class HuberLoss(LossBase):
delta * (|x_n - y_n| - 0.5 * delta), & \text{otherwise. }
\end{cases}
-where :math:`N` is the batch size. If `reduction` is not 'none', then:
+where :math:`N` is the batch size. If `reduction` is not "none", then:
.. math::
\ell(x, y) =
\begin{cases}
-\operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\
-\operatorname{sum}(L), & \text{if reduction} = \text{'sum'.}
+\operatorname{mean}(L), & \text{if reduction} = \text{"mean";}\\
+\operatorname{sum}(L), & \text{if reduction} = \text{"sum".}
\end{cases}
Args:
@@ -1521,27 +1521,30 @@ class HuberLoss(LossBase):
The value must be positive. Default: 1.0.
Inputs:
-- **logits** (Tensor) - Input logits with shape :math:`(N, *)` where :math:`*` means, any number
-of additional dimensions. The data type must be float16 or float32.
-- **labels** (Tensor) - Ground truth label with shape :math:`(N, *)`, same dtype as `logits`.
-It supports the shape of `logits` is different from the shape of `labels` and they should be
-broadcasted to each other.
+- **logits** (Tensor) - Predicted value, Tensor of any dimension. The data type must be float16 or float32.
+- **labels** (Tensor) - Target value, same dtype and shape as the `logits` in common cases.
+However, it supports the shape of `logits` is different from the shape of `labels`
+and they should be broadcasted to each other.
Outputs:
-Tensor or Scalar, if `reduction` is "none", its shape is the same as `logits`.
+Tensor or Scalar, if `reduction` is "none", return a Tensor with same shape and dtype as `logits`.
Otherwise, a scalar value will be returned.
Raises:
TypeError: If data type of `logits` or `labels` is neither float16 nor float32.
TypeError: If data type of `logits` or `labels` are not the same.
TypeError: If dtype of `delta` is neither float nor int.
ValueError: If `delta` is less than or equal to 0.
-ValueError: If `reduction` is not one of 'none', 'mean', 'sum'.
+ValueError: If `reduction` is not one of "none", "mean", "sum".
ValueError: If `logits` and `labels` have different shapes and cannot be broadcasted to each other.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
+>>> import mindspore
+>>> from mindspore import Tensor, nn
+>>> import numpy as np
>>> # Case 1: logits.shape = labels.shape = (3,)
>>> loss = nn.HuberLoss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
@ -1550,7 +1553,7 @@ class HuberLoss(LossBase):
>>> print(output)
0.16666667
>>> # Case 2: logits.shape = (3,), labels.shape = (2, 3)
->>> loss = nn.HuberLoss(reduction='none')
+>>> loss = nn.HuberLoss(reduction="none")
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 1, 1], [1, 2, 2]]), mindspore.float32)
>>> output = loss(logits, labels)
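The two docstring cases can be reproduced without MindSpore using plain NumPy broadcasting. This is a sketch under the assumption that it matches the documented formula; `huber_loss` is our own helper, not a MindSpore API:

```python
import numpy as np

def huber_loss(logits, labels, delta=1.0, reduction="mean"):
    # Element-wise Huber loss with NumPy broadcasting, mirroring the
    # documented formula and the "mean"/"sum"/"none" reduction semantics.
    diff = np.abs(np.asarray(logits, np.float32) - np.asarray(labels, np.float32))
    loss = np.where(diff < delta, 0.5 * diff ** 2, delta * (diff - 0.5 * delta))
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss  # reduction == "none": keep the broadcasted shape

# Case 1: per-element losses [0, 0, 0.5], mean -> 1/6 ~= 0.16666667
print(huber_loss([1, 2, 3], [1, 2, 2]))
# Case 2: logits (3,) broadcast against labels (2, 3), reduction "none"
print(huber_loss([1, 2, 3], [[1, 1, 1], [1, 2, 2]], reduction="none"))
```

Case 2 shows why `reduction="none"` returns the broadcasted shape `(2, 3)`: each row of `labels` is compared against the same `logits` vector, so the loss surface has one entry per broadcast pair.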