forked from mindspore-Ecosystem/mindspore
!49383 [BUG] Fix bce_with_logits doc bug
Merge pull request !49383 from douzhixing/doc
commit 84c6660817
@@ -9,7 +9,8 @@ mindspore.ops.BCEWithLogitsLoss

    .. math::
        \begin{array}{ll} \\
-           L_{ij} = -W_{ij}[Y_{ij}log(X_{ij}) + (1 - Y_{ij})log(1 - X_{ij})]
+           p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\
+           L_{ij} = -[Y_{ij}log(p_{ij}) + (1 - Y_{ij})log(1 - p_{ij})]
        \end{array}

    :math:`i` indicates the :math:`i^{th}` sample, :math:`j` indicates the category. Then,
@@ -23,6 +24,22 @@ mindspore.ops.BCEWithLogitsLoss

    :math:`\ell` indicates the method of calculating the loss. There are three methods: the first method is to provide the loss value directly, the second method is to calculate the average of all losses, and the third method is to calculate the sum of all losses.

    This operator will multiply the output by the corresponding weight.
+   :math:`weight` assigns different weights to each piece of data in the batch,
+   and :math:`pos_weight` adds corresponding weights to the positive examples of each category.
+
+   In addition, it can trade off recall and precision by adding weights to positive examples.
+   In the case of multi-label classification the loss can be described as:
+
+   .. math::
+       \begin{array}{ll} \\
+           p_{ij,c} = sigmoid(X_{ij,c}) = \frac{1}{1 + e^{-X_{ij,c}}} \\
+           L_{ij,c} = -[P_{c}Y_{ij,c} * log(p_{ij,c}) + (1 - Y_{ij,c})log(1 - p_{ij,c})]
+       \end{array}
+
+   where c is the class number (c>1 for multi-label binary classification, c=1 for single-label binary classification),
+   n is the number of the sample in the batch and :math:`P_c` is the weight of the positive answer for the class c.
+   :math:`P_c>1` increases the recall, :math:`P_c<1` increases the precision.

    Args:
        - **reduction** (str) - Type of reduction to be applied to loss. The optional values are 'mean', 'sum', and 'none', not case-sensitive. If 'none', do not perform reduction. Default: 'mean'.
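The corrected formula applies the sigmoid before taking the log, rather than passing the raw logits to the log as the old line did. As a minimal sketch of the per-element loss in plain NumPy (illustrative values, not from this patch):

>>> import numpy as np
>>> x = np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]], dtype=np.float32)  # logits X_ij
>>> y = np.array([[0.0, 1.0, 1.0], [0.0, 0.0, 1.0]], dtype=np.float32)     # labels Y_ij
>>> p = 1.0 / (1.0 + np.exp(-x))                           # p_ij = sigmoid(X_ij)
>>> loss = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))  # L_ij from the formula above
>>> mean_loss = loss.mean()                                # reduction='mean'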
@@ -9,7 +9,8 @@ mindspore.ops.binary_cross_entropy_with_logits

    .. math::
        \begin{array}{ll} \\
-           L_{ij} = -W_{ij}[Y_{ij}log(X_{ij}) + (1 - Y_{ij})log(1 - X_{ij})]
+           p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\
+           L_{ij} = -[Y_{ij}log(p_{ij}) + (1 - Y_{ij})log(1 - p_{ij})]
        \end{array}

    :math:`i` indicates the :math:`i^{th}` sample, :math:`j` indicates the category. Then,
@@ -23,6 +24,22 @@ mindspore.ops.binary_cross_entropy_with_logits

    :math:`\ell` indicates the method of calculating the loss. There are three methods: the first method is to provide the loss value directly, the second method is to calculate the average of all losses, and the third method is to calculate the sum of all losses.

    This operator will multiply the output by the corresponding weight.
+   :math:`weight` assigns different weights to each piece of data in the batch,
+   and :math:`pos_weight` adds corresponding weights to the positive examples of each category.
+
+   In addition, it can trade off recall and precision by adding weights to positive examples.
+   In the case of multi-label classification the loss can be described as:
+
+   .. math::
+       \begin{array}{ll} \\
+           p_{ij,c} = sigmoid(X_{ij,c}) = \frac{1}{1 + e^{-X_{ij,c}}} \\
+           L_{ij,c} = -[P_{c}Y_{ij,c} * log(p_{ij,c}) + (1 - Y_{ij,c})log(1 - p_{ij,c})]
+       \end{array}
+
+   where c is the class number (c>1 for multi-label binary classification, c=1 for single-label binary classification),
+   n is the number of the sample in the batch and :math:`P_c` is the weight of the positive answer for the class c.
+   :math:`P_c>1` increases the recall, :math:`P_c<1` increases the precision.

    Args:
        - **logits** (Tensor) - Input logits, a Tensor of any dimension. The data type must be float16 or float32.
        - **label** (Tensor) - Ground truth label, with the same shape as `logits`. The data type must be float16 or float32.
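Assuming the functional signature shown in the hunk further below, binary_cross_entropy_with_logits(logits, label, weight, pos_weight, reduction='mean'), a usage sketch with illustrative tensors:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]), mindspore.float32)
>>> label = Tensor(np.array([[0.0, 1.0, 1.0], [0.0, 0.0, 1.0]]), mindspore.float32)
>>> weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)      # per-element weight W
>>> pos_weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)  # per-class weight P_c
>>> output = ops.binary_cross_entropy_with_logits(logits, label, weight, pos_weight)
>>> # output is a scalar Tensor, since reduction defaults to 'mean'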
@@ -640,7 +640,7 @@ class SoftMarginLoss(LossBase):
        ValueError: If `reduction` is not one of 'none', 'mean', 'sum'.

    Supported Platforms:
-       ``Ascend``
+       ``Ascend`` ``GPU``

    Examples:
        >>> loss = nn.SoftMarginLoss()
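With ``GPU`` newly listed, the example truncated above would typically continue along these lines (a sketch with illustrative values; soft margin loss averages log(1 + exp(-y * x)) over all elements):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> loss = nn.SoftMarginLoss()
>>> logits = Tensor(np.array([[0.3, 0.7], [0.5, 0.5]]), mindspore.float32)
>>> labels = Tensor(np.array([[-1.0, 1.0], [1.0, -1.0]]), mindspore.float32)  # targets in {-1, 1}
>>> output = loss(logits, labels)  # mean of log(1 + exp(-labels * logits))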
@@ -1084,7 +1084,7 @@ def binary_cross_entropy_with_logits(logits, label, weight, pos_weight, reductio

        \begin{array}{ll} \\
            p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\
-           L_{ij} = -[Y_{ij} * log(p_{ij}) + (1 - Y_{ij})log(1 - p_{ij})]
+           L_{ij} = -[Y_{ij}log(p_{ij}) + (1 - Y_{ij})log(1 - p_{ij})]
        \end{array}

    :math:`i` indicates the :math:`i^{th}` sample, :math:`j` indicates the category. Then,
@@ -1115,8 +1115,8 @@ def binary_cross_entropy_with_logits(logits, label, weight, pos_weight, reductio
        \end{array}

    where c is the class number (c>1 for multi-label binary classification, c=1 for single-label binary classification),
-   n is the number of the sample in the batch and :math:`p_c` is the weight of the positive answer for the class c.
-   :math:`p_c>1` increases the recall, :math:`p_c<1` increases the precision.
+   n is the number of the sample in the batch and :math:`P_c` is the weight of the positive answer for the class c.
+   :math:`P_c>1` increases the recall, :math:`P_c<1` increases the precision.

    Args:
        logits (Tensor): Input logits. Data type must be float16 or float32.
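The :math:`P_c` notation fixed here has a practical reading: raising pos_weight for class c scales the positive term of the loss, trading precision for recall on that class. A sketch under the same assumed functional API (illustrative values):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7]]), mindspore.float32)
>>> label = Tensor(np.array([[1.0, 1.0, 0.0]]), mindspore.float32)  # first positive is missed
>>> weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> ones = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)     # P_c = 1 everywhere
>>> up = Tensor(np.array([3.0, 1.0, 1.0]), mindspore.float32)       # P_c > 1 for class 0
>>> base = ops.binary_cross_entropy_with_logits(logits, label, weight, ones)
>>> biased = ops.binary_cross_entropy_with_logits(logits, label, weight, up)
>>> # biased > base: the missed positive in class 0 now costs three times as much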
@@ -3013,7 +3013,7 @@ class SoftMarginLoss(Primitive):
        ValueError: If `reduction` is not one of 'none', 'mean' or 'sum'.

    Supported Platforms:
-       ``Ascend``
+       ``Ascend`` ``GPU``

    Examples:
        >>> loss = ops.SoftMarginLoss()
@@ -4116,7 +4116,7 @@ class BCEWithLogitsLoss(PrimitiveWithInfer):

        \begin{array}{ll} \\
            p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\
-           L_{ij} = -[Y_{ij} * log(p_{ij}) + (1 - Y_{ij})log(1 - p_{ij})]
+           L_{ij} = -[Y_{ij}log(p_{ij}) + (1 - Y_{ij})log(1 - p_{ij})]
        \end{array}

    :math:`i` indicates the :math:`i^{th}` sample, :math:`j` indicates the category. Then,
@@ -4147,8 +4147,8 @@ class BCEWithLogitsLoss(PrimitiveWithInfer):
        \end{array}

    where c is the class number (c>1 for multi-label binary classification, c=1 for single-label binary classification),
-   n is the number of the sample in the batch and :math:`p_c` is the weight of the positive answer for the class c.
-   :math:`p_c>1` increases the recall, :math:`p_c<1` increases the precision.
+   n is the number of the sample in the batch and :math:`P_c` is the weight of the positive answer for the class c.
+   :math:`P_c>1` increases the recall, :math:`P_c<1` increases the precision.

    Args:
        reduction (str): Type of reduction to be applied to loss. The optional values are 'mean', 'sum', and 'none',
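For the Primitive form touched by this hunk, a usage sketch assuming its documented call pattern, where reduction is a constructor argument and the call takes (logits, label, weight, pos_weight); values are illustrative:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]), mindspore.float32)
>>> label = Tensor(np.array([[0.0, 1.0, 1.0], [0.0, 0.0, 1.0]]), mindspore.float32)
>>> weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> pos_weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> loss = ops.BCEWithLogitsLoss(reduction='sum')  # 'mean', 'sum' or 'none'
>>> output = loss(logits, label, weight, pos_weight)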