!37283 gumbel_softmax docs fix error
Merge pull request !37283 from TuDouNi/gumbelsoftmax2
commit 7366848304
@@ -8,7 +8,7 @@ mindspore.ops.gumbel_softmax

 **Parameters:**

 - **logits** (Tensor) - Input, an unnormalized log probability distribution. Only float16 and float32 are supported.
-- **tau** (float) - Non-negative scalar temperature. Default: 1.0.
+- **tau** (float) - Scalar temperature, a positive number. Default: 1.0.
 - **hard** (bool) - If True, returns a one-hot discrete Tensor that is differentiable in backpropagation. Default: False.
 - **dim** (int) - The dimension along which softmax is applied. Default: -1.
@@ -4082,7 +4082,7 @@ def gumbel_softmax(logits, tau=1, hard=False, dim=-1):

     Args:
         logits (Tensor): Unnormalized log probabilities. The data type must be float16 or float32.
-        tau (float): Non-negative scalar temperature. Default: 1.0.
+        tau (float): The scalar temperature, which is a positive number. Default: 1.0.
         hard (bool): If `True`, the returned samples will be discretized as one-hot vectors, but will be
             differentiated as if it is the soft sample in autograd. Default: False.
         dim (int): The dimension along which softmax is computed. Default: -1.
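The corrected constraint (`tau` must be positive, not merely non-negative) matters because `tau` divides the perturbed logits, so `tau = 0` would divide by zero. A minimal pure-Python sketch of the Gumbel-Softmax sampling these docs describe (an illustration, not the MindSpore implementation; it only handles a 1-D list of logits, and the `hard` branch here returns a plain one-hot without the straight-through autograd behavior, which is framework-specific):

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, hard=False):
    """Sample a relaxed one-hot vector from unnormalized log probabilities."""
    if tau <= 0:
        # tau scales the noise and is a divisor, so it must be strictly positive
        raise ValueError("tau must be a positive number")
    # Gumbel(0, 1) noise: g = -log(-log(U)) with U ~ Uniform(0, 1)
    noisy = [(l - math.log(-math.log(random.random()))) / tau for l in logits]
    # numerically stable softmax over the perturbed, temperature-scaled logits
    m = max(noisy)
    exps = [math.exp(x - m) for x in noisy]
    total = sum(exps)
    soft = [e / total for e in exps]
    if hard:
        # discretize to a one-hot vector at the argmax
        k = soft.index(max(soft))
        return [1.0 if i == k else 0.0 for i in range(len(soft))]
    return soft
```

As `tau` shrinks toward 0 the soft samples approach one-hot vectors; large `tau` flattens them toward uniform, which is why the docstring constrains it to a positive number.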