!48293 [docs][api] Update GaussianNllLoss document
Merge pull request !48293 from shaojunsong/code_docs_gnl
commit a81ec854c1
@@ -12,7 +12,7 @@ mindspore.nn.GaussianNLLLoss
\ \text{eps}\right)\right) + \frac{\left(\text{logits} - \text{labels}\right)^2}
{\text{max}\left(\text{var}, \ \text{eps}\right)}\right) + \text{const.}
-where :math:`eps` is used for the stability of :math:`log`. By default, the constant term is omitted unless :math:`full=True`. If the shape of :math:`var` is not the same as that of :math:`logits` (due to a homoscedastic assumption), it must either have a final dimension of 1 or have one fewer dimension (with all other sizes being the same) for correct broadcasting.
+where :math:`eps` is used for the stability of :math:`log`. When :math:`full=True`, a constant is added to the loss. If the shapes of :math:`var` and :math:`logits` are not the same (due to a homoscedastic assumption), they must be broadcastable.
Parameters:
- **full** (bool) - Whether to include the constant term in the loss. If True, the constant is :math:`const = 0.5*log(2*pi)`. Default: False.
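The edit in this hunk changes only the prose; the formula itself is untouched. As a reading aid, here is a minimal NumPy sketch of that formula; the helper name gaussian_nll is hypothetical and the 'mean' reduction is just one of the documented options:

    import numpy as np

    def gaussian_nll(logits, labels, var, full=False, eps=1e-6):
        # Clamp the variance, matching max(var, eps) in the formula, so log stays stable.
        var = np.maximum(var, eps)
        loss = 0.5 * (np.log(var) + (logits - labels) ** 2 / var)
        if full:
            # Constant term const. = 0.5 * log(2 * pi), included only when full=True.
            loss = loss + 0.5 * np.log(2 * np.pi)
        return loss.mean()  # 'mean' reduction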
@@ -12,7 +12,7 @@ mindspore.ops.gaussian_nll_loss
\ \text{eps}\right)\right) + \frac{\left(\text{x} - \text{target}\right)^2}
{\text{max}\left(\text{var}, \ \text{eps}\right)}\right) + \text{const.}
-where :math:`eps` is used for the stability of :math:`log`. By default, the constant term is omitted unless :math:`full=True`. If the shape of :math:`var` is not the same as that of :math:`x` (due to a homoscedastic assumption), it must either have a final dimension of 1 or have one fewer dimension (with all other sizes being the same) for correct broadcasting.
+where :math:`eps` is used for the stability of :math:`log`. When :math:`full=True`, a constant is added to the loss. If the shapes of :math:`var` and :math:`x` are not the same (due to a homoscedastic assumption), they must be broadcastable.
Parameters:
- **x** (Tensor) - Tensor of shape :math:`(N, *)` or :math:`(*)`, where `*` means any number of additional dimensions.
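A minimal usage sketch for the functional interface this hunk documents, assuming the signature `gaussian_nll_loss(x, target, var, full=False, eps=1e-6, reduction='mean')` shown in a later hunk header; shapes and values are illustrative:

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    # Illustrative inputs: 4 samples with 2 values each; var must be positive.
    x = ms.Tensor(np.random.randn(4, 2), ms.float32)        # predicted expectations
    target = ms.Tensor(np.random.randn(4, 2), ms.float32)   # observed samples
    var = ms.Tensor(np.abs(np.random.randn(4, 2)) + 0.1, ms.float32)

    loss = ops.gaussian_nll_loss(x, target, var, full=False, eps=1e-6, reduction='mean')
    print(loss)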
@@ -2376,25 +2376,25 @@ class CTCLoss(LossBase):
class GaussianNLLLoss(LossBase):
r"""Gaussian negative log likelihood loss.
|
||||
r"""
|
||||
Gaussian negative log likelihood loss.
-The targets are treated as samples from Gaussian distributions with expectations and variances predicted by the
-neural network. For a `labels` tensor modelled as having Gaussian distribution with a tensor of expectations
-`logits` and a tensor of positive variances `var` the loss is:
+The target values are considered to be samples from a Gaussian distribution, where the expectation and variance are
+predicted by a neural network. For `labels` modeled on a Gaussian distribution, with `logits` recording the
+expectations and `var` the variances (elements are all positive), the calculated loss is:
.. math::
    \text{loss} = \frac{1}{2}\left(\log\left(\text{max}\left(\text{var},
    \ \text{eps}\right)\right) + \frac{\left(\text{logits} - \text{labels}\right)^2}
    {\text{max}\left(\text{var}, \ \text{eps}\right)}\right) + \text{const.}
-where `eps` is used for stability of :math:`log`. By default, the constant term of the loss function is omitted
-unless :math:`full=True`. If the shape of :math:`var` is not the same as `logits` (due to a
-homoscedastic assumption), it must either have a final dimension of 1 or have one fewer dimension
-(with all other sizes being the same) for correct broadcasting.
+where `eps` is used for stability of :math:`log`. When :math:`full=True`, a constant will be added to the loss. If
+the shapes of :math:`var` and `logits` are not the same (due to a homoscedastic assumption), they must allow
+correct broadcasting.
Args:
-full (bool): Include the constant term in the loss calculation. When :math:`full=True`, the constant term
-`const.` will be :math:`0.5 * log(2\pi)`. Default: False.
+full (bool): Whether to include the constant term in the loss calculation. When :math:`full=True`,
+the constant term `const.` will be :math:`0.5 * log(2\pi)`. Default: False.
eps (float): Used to improve the stability of log function. Default: 1e-6.
reduction (str): Apply specific reduction method to the output: 'none', 'mean', or 'sum'. Default: 'mean'.
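A minimal usage sketch for the class this hunk documents, assuming the constructor arguments (`full`, `eps`, `reduction`) and the call inputs (`logits`, `labels`, `var`) described above; shapes and values are illustrative:

    import numpy as np
    import mindspore as ms
    import mindspore.nn as nn

    logits = ms.Tensor(np.random.randn(4, 2), ms.float32)   # predicted expectations
    labels = ms.Tensor(np.random.randn(4, 2), ms.float32)   # observed samples
    var = ms.Tensor(np.abs(np.random.randn(4, 2)) + 0.1, ms.float32)  # positive variances

    loss_fn = nn.GaussianNLLLoss(full=True, eps=1e-6, reduction='mean')
    loss = loss_fn(logits, labels, var)  # scalar Tensor under 'mean' reduction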
@@ -3975,19 +3975,18 @@ def gaussian_nll_loss(x, target, var, full=False, eps=1e-6, reduction='mean'):
r"""
Gaussian negative log likelihood loss.
-The targets are treated as samples from Gaussian distributions with expectations and variances predicted by the
-neural network. For a `target` tensor modelled as having Gaussian distribution with a tensor of expectations
-`x` and a tensor of positive variances `var` the loss is:
+The target values are considered to be samples from a Gaussian distribution, where the expectation and variance are
+predicted by a neural network. For `target` modeled on a Gaussian distribution, with `x` recording the
+expectations and `var` the variances (elements are all positive), the calculated loss is:
.. math::
    \text{loss} = \frac{1}{2}\left(\log\left(\text{max}\left(\text{var},
    \ \text{eps}\right)\right) + \frac{\left(\text{x} - \text{target}\right)^2}
    {\text{max}\left(\text{var}, \ \text{eps}\right)}\right) + \text{const.}
-where `eps` is used for stability of :math:`log`. By default, the constant term of the loss function is omitted
-unless :math:`full=True`. If the shape of :math:`var` is not the same as `x` (due to a
-homoscedastic assumption), it must either have a final dimension of 1 or have one fewer dimension
-(with all other sizes being the same) for correct broadcasting.
+where `eps` is used for stability of :math:`log`. When :math:`full=True`, a constant will be added to the loss. If
+the shapes of :math:`var` and `x` are not the same (due to a homoscedastic assumption), they must allow
+correct broadcasting.
Args:
x (Tensor): Tensor of shape :math:`(N, *)` or :math:`(*)` where :math:`*` means any number of additional dimensions.
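A sketch of the homoscedastic case discussed in this hunk, where `var` keeps a final dimension of 1 and broadcasts against `x`; shapes and values are illustrative:

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    # One shared variance per sample: var is (4, 1) while x and target are (4, 3).
    x = ms.Tensor(np.random.randn(4, 3), ms.float32)
    target = ms.Tensor(np.random.randn(4, 3), ms.float32)
    var = ms.Tensor(np.full((4, 1), 0.5, dtype=np.float32))

    loss = ops.gaussian_nll_loss(x, target, var, full=False, eps=1e-6, reduction='mean')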