!40586 modify the wrong format of the files
Merge pull request !40586 from 宦晓玲/code_docs_0818
Commit: 4a00560a8a
@@ -24,3 +24,6 @@ mindspore.nn.DynamicLossScaleUpdateCell
 .. py:method:: get_loss_scale()

     Get the current loss scaling coefficient.
+
+    Returns:
+        float, the loss scaling coefficient.

@@ -20,3 +20,6 @@ mindspore.nn.FixedLossScaleUpdateCell
 .. py:method:: get_loss_scale()

     Get the current loss scaling coefficient.
+
+    Returns:
+        float, the loss scaling coefficient.

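For reference, a minimal usage sketch of the `get_loss_scale` method documented in both hunks above (illustrative, not part of the diff; assumes a working MindSpore install):

>>> from mindspore import nn
>>> # Dynamic cell: the scale starts at loss_scale_value and is adjusted during training.
>>> manager = nn.DynamicLossScaleUpdateCell(loss_scale_value=2**12, scale_factor=2, scale_window=1000)
>>> print(manager.get_loss_scale())
4096.0
>>> # Fixed cell: the scale never changes.
>>> manager = nn.FixedLossScaleUpdateCell(loss_scale_value=2**12)
>>> print(manager.get_loss_scale())
4096.0
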
@@ -677,14 +677,14 @@ class Parameter(Tensor_):
             set_sliced (bool): True if the parameter is set sliced after initializing the data.
                 Default: False.

+        Returns:
+            Parameter, the `Parameter` after initializing data. If current `Parameter` was already initialized before,
+            returns the same initialized `Parameter`.
+
         Raises:
             RuntimeError: If it is from Initializer, and parallel mode has changed after the Initializer created.
             ValueError: If the length of the layout is less than 6.
             TypeError: If `layout` is not tuple.

-        Returns:
-            Parameter, the `Parameter` after initializing data. If current `Parameter` was already initialized before,
-            returns the same initialized `Parameter`.
-
         """
         if self.is_default_input_init and self.is_in_parallel != _is_in_parallel_mode():
             raise RuntimeError("Must set or change parallel mode before any Tensor created.")

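An illustrative sketch of `init_data` in use (not part of the diff; assumes a standard MindSpore install):

>>> import mindspore as ms
>>> from mindspore import Parameter
>>> from mindspore.common.initializer import initializer
>>> # The data is described lazily by the Initializer and materialized here.
>>> weight = Parameter(initializer('ones', [2, 3], ms.float32), name='w')
>>> weight = weight.init_data()
>>> # A second call returns the same initialized Parameter.
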
@@ -1101,6 +1101,9 @@ class Dataset:
             Shuffling the dataset may not be deterministic, which means the data in each split
             will be different in each epoch.

+        Returns:
+            tuple(Dataset), a tuple of datasets that have been split.
+
         Raises:
             RuntimeError: If get_dataset_size returns None or is not supported for this dataset.
             RuntimeError: If `sizes` is list of integers and sum of all elements in sizes does not

@@ -1110,9 +1113,6 @@ class Dataset:
             ValueError: If `sizes` is list of float and not all floats are between 0 and 1, or if the
                 floats don't sum to 1.

-        Returns:
-            tuple(Dataset), a tuple of datasets that have been split.
-
         Examples:
             >>> # TextFileDataset is not a mappable dataset, so this non-optimized split will be called.
             >>> # Since many datasets have shuffle on by default, set shuffle to False if split will be called!

@@ -2288,6 +2288,9 @@ class MappableDataset(SourceDataset):
             will be different in each epoch. Furthermore, if sharding occurs after split, each
             shard may not be part of the same split.

+        Returns:
+            tuple(Dataset), a tuple of datasets that have been split.
+
         Raises:
             RuntimeError: If get_dataset_size returns None or is not supported for this dataset.
             RuntimeError: If `sizes` is list of integers and sum of all elements in sizes does not

@@ -2297,9 +2300,6 @@ class MappableDataset(SourceDataset):
             ValueError: If `sizes` is list of float and not all floats are between 0 and 1, or if the
                 floats don't sum to 1.

-        Returns:
-            tuple(Dataset), a tuple of datasets that have been split.
-
         Examples:
             >>> # Since many datasets have shuffle on by default, set shuffle to False if split will be called!
             >>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir, shuffle=False)

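A short usage sketch of `split` matching the Returns and Raises sections above (illustrative; `image_folder_dataset_dir` is a placeholder path taken from the example lines):

>>> import mindspore.dataset as ds
>>> # Shuffle is disabled so the split is deterministic across epochs.
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir, shuffle=False)
>>> train_dataset, test_dataset = dataset.split([0.9, 0.1], randomize=False)
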
@@ -914,8 +914,9 @@ class Cell(Cell_):
         Returns the dynamic_inputs of a cell object in one network.

         Returns:
-            inputs (tuple): Inputs of the Cell object.
-        NOTE:
+            inputs (tuple), Inputs of the Cell object.
+
+        Note:
             This is an experimental interface that is subject to change or deletion.
         """

@@ -2006,7 +2007,7 @@ class Cell(Cell_):
     def set_comm_fusion(self, fusion_type, recurse=True):
         """
         Set `comm_fusion` for all the parameters in this cell. Please refer to the description of
         :class:`mindspore.Parameter.comm_fusion`.

         Note:
             The value of attribute will be overwritten when the function is called multiply.

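For context, a minimal sketch of `set_comm_fusion` (illustrative; assumes `net` is an already-constructed Cell in a parallel setting where communication fusion applies):

>>> net.set_comm_fusion(2)
>>> # Every parameter in net now carries comm_fusion == 2; calling it again overwrites the value.
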
@@ -358,15 +358,15 @@ def polynomial_decay_lr(learning_rate, end_learning_rate, total_step, step_per_epoch,
         power (float): The power of polynomial. It must be greater than 0.
         update_decay_epoch (bool): If true, update `decay_epoch`. Default: False.

+    Returns:
+        list[float]. The size of list is `total_step`.
+
     Raises:
         TypeError: If `learning_rate` or `end_learning_rate` or `power` is not a float.
         TypeError: If `total_step` or `step_per_epoch` or `decay_epoch` is not an int.
         TypeError: If `update_decay_epoch` is not a bool.
         ValueError: If `learning_rate` or `power` is not greater than 0.

-    Returns:
-        list[float]. The size of list is `total_step`.
-
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

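An illustrative call matching the Returns section above (the returned list has `total_step` entries):

>>> from mindspore import nn
>>> lr = nn.polynomial_decay_lr(learning_rate=0.1, end_learning_rate=0.01, total_step=6,
...                             step_per_epoch=2, decay_epoch=2, power=0.5)
>>> print(len(lr))
6
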
@@ -1319,13 +1319,13 @@ class HShrink(Cell):
     Outputs:
         Tensor, the same shape and data type as the input.

-    Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
-
     Raises:
         TypeError: If `lambd` is not a float.
         TypeError: If dtype of `input_x` is neither float16 nor float32.

+    Supported Platforms:
+        ``Ascend`` ``CPU`` ``GPU``
+
     Examples:
         >>> import mindspore
         >>> from mindspore import Tensor, nn

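A brief sketch of `HShrink` (illustrative; with the default `lambd=0.5`, values inside [-0.5, 0.5] are zeroed):

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> x = Tensor(np.array([0.2, 0.9, -0.3, -1.5]), mindspore.float32)
>>> hshrink = nn.HShrink()
>>> output = hshrink(x)
>>> # expected: [0.0, 0.9, 0.0, -1.5]
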
@@ -1368,13 +1368,13 @@ class Threshold(Cell):
     Outputs:
         Tensor, the same shape and data type as the input.

-    Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
-
     Raises:
         TypeError: If `threshold` is not a float or an int.
         TypeError: If `value` is not a float or an int.

+    Supported Platforms:
+        ``Ascend`` ``CPU`` ``GPU``
+
     Examples:
         >>> import mindspore
         >>> import mindspore.nn as nn

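A brief sketch of `Threshold` (illustrative; elements not greater than `threshold` are replaced by `value`):

>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> m = nn.Threshold(0.1, 20)
>>> inputs = Tensor([0.1, 0.2, 0.3], mindspore.float32)
>>> outputs = m(inputs)
>>> # expected: [20.0, 0.2, 0.3], since 0.1 is not greater than the threshold
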
@@ -344,15 +344,15 @@ class BatchNorm1d(_BatchNorm):
     Outputs:
         Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C_{out})`.

-    Supported Platforms:
-        ``Ascend`` ``GPU`` ``CPU``
-
     Raises:
         TypeError: If `num_features` is not an int.
         TypeError: If `eps` is not a float.
         ValueError: If `num_features` is less than 1.
         ValueError: If `momentum` is not in range [0, 1].

+    Supported Platforms:
+        ``Ascend`` ``GPU`` ``CPU``
+
     Examples:
         >>> import numpy as np
         >>> import mindspore.nn as nn

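A minimal sketch of `BatchNorm1d` on a 2D input, matching the Outputs section above (illustrative):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.BatchNorm1d(num_features=4)
>>> x = Tensor(np.array([[0.7, 0.5, 0.5, 0.6],
...                      [0.5, 0.4, 0.6, 0.9]]).astype(np.float32))
>>> output = net(x)
>>> print(output.shape)
(2, 4)
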
@@ -977,9 +977,6 @@ class InstanceNorm1d(_InstanceNorm):
         Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C, L)`. Same type and
         shape as the `x`.

-    Supported Platforms:
-        ``GPU``
-
     Raises:
         TypeError: If the type of `num_features` is not int.
         TypeError: If the type of `eps` is not float.

@@ -993,6 +990,9 @@ class InstanceNorm1d(_InstanceNorm):
         KeyError: If any of `gamma_init`/`beta_init` is str and the homonymous class inheriting from `Initializer` not
             exists.

+    Supported Platforms:
+        ``GPU``
+
     Examples:
         >>> import mindspore
         >>> import numpy as np

@@ -1067,9 +1067,6 @@ class InstanceNorm2d(_InstanceNorm):
         Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C, H, W)`. Same type and
         shape as the `x`.

-    Supported Platforms:
-        ``GPU``
-
     Raises:
         TypeError: If the type of `num_features` is not int.
         TypeError: If the type of `eps` is not float.

@@ -1083,6 +1080,9 @@ class InstanceNorm2d(_InstanceNorm):
         KeyError: If any of `gamma_init`/`beta_init` is str and the homonymous class inheriting from `Initializer` not
             exists.

+    Supported Platforms:
+        ``GPU``
+
     Examples:
         >>> import mindspore
         >>> import numpy as np

@@ -1157,9 +1157,6 @@ class InstanceNorm3d(_InstanceNorm):
         Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C, D, H, W)`. Same type and
         shape as the `x`.

-    Supported Platforms:
-        ``GPU``
-
     Raises:
         TypeError: If the type of `num_features` is not int.
         TypeError: If the type of `eps` is not float.

@@ -1173,6 +1170,9 @@ class InstanceNorm3d(_InstanceNorm):
         KeyError: If any of `gamma_init`/`beta_init` is str and the homonymous class inheriting from `Initializer` not
             exists.

+    Supported Platforms:
+        ``GPU``
+
     Examples:
         >>> import mindspore
         >>> import numpy as np

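The three InstanceNorm hunks above apply the same reordering; a usage sketch for `InstanceNorm2d` (illustrative; these layers are documented as GPU-only):

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> net = nn.InstanceNorm2d(num_features=3)
>>> x = Tensor(np.ones([2, 3, 2, 2]), mindspore.float32)
>>> output = net(x)
>>> print(output.shape)
(2, 3, 2, 2)
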
@@ -87,12 +87,12 @@ class TimeDistributed(Cell):
     Outputs:
         Tensor of shape :math:`(N, T, *)`

-    Supported Platforms:
-        ``Ascend`` ``GPU`` ``CPU``
-
     Raises:
         TypeError: If layer is not a Cell or Primitive.

+    Supported Platforms:
+        ``Ascend`` ``GPU`` ``CPU``
+
     Examples:
         >>> x = Tensor(np.random.random([32, 10, 3]), mindspore.float32)
         >>> dense = nn.Dense(3, 6)

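Completing the truncated example above, a sketch of wrapping a Dense layer with `TimeDistributed` (illustrative):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> x = Tensor(np.random.random([32, 10, 3]), mindspore.float32)
>>> dense = nn.Dense(3, 6)
>>> net = nn.TimeDistributed(dense, time_axis=1, reshape_with_axis=0)
>>> output = net(x)
>>> print(output.shape)
(32, 10, 6)
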
@@ -73,7 +73,7 @@ class AdaMax(Optimizer):
     Note:
         If parameters are not grouped, the `weight_decay` in optimizer will be applied on the network parameters without
         'beta' or 'gamma' in their names. Users can group parameters to change the strategy of decaying weight. When
-        parameters are grouped, each group can set `weight_decay`, if not, the `weight_decay` in optimizer will be
+        parameters are grouped, each group can set `weight_decay`. If not, the `weight_decay` in optimizer will be
         applied.

     Args:

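The Note above describes parameter grouping; a sketch of one grouped configuration (illustrative; assumes `net` is a constructed Cell):

>>> from mindspore import nn
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01},
...                 {'params': no_conv_params}]
>>> # conv parameters decay with 0.01; the rest fall back to the optimizer-level value.
>>> optim = nn.AdaMax(group_params, learning_rate=0.1, weight_decay=0.0)
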
@@ -243,7 +243,7 @@ class Bijector(Cell):
             *args (list): the list of positional arguments forwarded to subclasses.
             **kwargs (dict): the dictionary of keyword arguments forwarded to subclasses.

-        Output:
+        Returns:
             Tensor, the value of the transformed random variable.
         """
         return self._forward(value, *args, **kwargs)

@@ -257,7 +257,7 @@ class Bijector(Cell):
             *args (list): the list of positional arguments forwarded to subclasses.
             **kwargs (dict): the dictionary of keyword arguments forwarded to subclasses.

-        Output:
+        Returns:
             Tensor, the value of the input random variable.
         """
         return self._inverse(value, *args, **kwargs)

@@ -271,7 +271,7 @@ class Bijector(Cell):
             *args (list): the list of positional arguments forwarded to subclasses.
             **kwargs (dict): the dictionary of keyword arguments forwarded to subclasses.

-        Output:
+        Returns:
             Tensor, the value of logarithm of the derivative of the forward transformation.
         """
         return self._forward_log_jacobian(value, *args, **kwargs)

@@ -285,7 +285,7 @@ class Bijector(Cell):
             *args (list): the list of positional arguments forwarded to subclasses.
             **kwargs (dict): the dictionary of keyword arguments forwarded to subclasses.

-        Output:
+        Returns:
             Tensor, the value of logarithm of the derivative of the inverse transformation.
         """
         return self._inverse_log_jacobian(value, *args, **kwargs)

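The four methods above share one calling pattern; a sketch using the concrete `ScalarAffine` bijector (illustrative):

>>> import mindspore
>>> import mindspore.nn.probability.bijector as msb
>>> from mindspore import Tensor
>>> affine = msb.ScalarAffine(scale=2.0, shift=1.0)
>>> value = Tensor([1, 2, 3], dtype=mindspore.float32)
>>> y = affine.forward(value)                 # 2 * x + 1
>>> x = affine.inverse(value)                 # (x - 1) / 2
>>> ldj = affine.forward_log_jacobian(value)  # log(2)
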
@@ -32,9 +32,6 @@ class GumbelCDF(Bijector):
         scale (float, list, numpy.ndarray, Tensor): The scale. Default: 1.0.
         name (str): The name of the Bijector. Default: 'GumbelCDF'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
     Note:
         `scale` must be greater than zero.
         For `inverse` and `inverse_log_jacobian`, input should be in range of (0, 1).

@@ -46,6 +43,9 @@ class GumbelCDF(Bijector):
         TypeError: When the dtype of `loc` or `scale` is not float,
             or when the dtype of `loc` and `scale` is not same.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
     Examples:
         >>> import mindspore
         >>> import mindspore.nn as nn

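A construction sketch for `GumbelCDF` with the arguments documented above (illustrative; per the Note, `inverse` expects inputs in (0, 1)):

>>> import mindspore
>>> import mindspore.nn.probability.bijector as msb
>>> from mindspore import Tensor
>>> gumbel_cdf = msb.GumbelCDF(loc=1.0, scale=2.0)
>>> x = Tensor([1.0, 2.0, 3.0], dtype=mindspore.float32)
>>> y = gumbel_cdf.forward(x)    # CDF values, all in (0, 1)
>>> x_back = gumbel_cdf.inverse(y)
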
@@ -37,9 +37,6 @@ class PowerTransform(Bijector):
         power (float, list, numpy.ndarray, Tensor): The scale factor. Default: 0.
         name (str): The name of the bijector. Default: 'PowerTransform'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
     Note:
         The dtype of `power` must be float.

@@ -47,6 +44,9 @@ class PowerTransform(Bijector):
         ValueError: When `power` is less than 0 or is not known statically.
         TypeError: When the dtype of `power` is not float.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
     Examples:
         >>> import mindspore
         >>> import mindspore.nn as nn

@@ -33,9 +33,6 @@ class ScalarAffine(Bijector):
         shift (float, list, numpy.ndarray, Tensor): The shift factor. Default: 0.0.
         name (str): The name of the bijector. Default: 'ScalarAffine'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
     Note:
         The dtype of `shift` and `scale` must be float.
         If `shift`, `scale` are passed in as numpy.ndarray or tensor, they have to have

@@ -45,6 +42,9 @@ class ScalarAffine(Bijector):
         TypeError: When the dtype of `shift` or `scale` is not float,
             and when the dtype of `shift` and `scale` is not same.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
     Examples:
         >>> import mindspore
         >>> import mindspore.nn as nn

@@ -34,15 +34,15 @@ class Softplus(Bijector):
         sharpness (float, list, numpy.ndarray, Tensor): The scale factor. Default: 1.0.
         name (str): The name of the Bijector. Default: 'Softplus'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
     Note:
         The dtype of `sharpness` must be float.

     Raises:
         TypeError: When the dtype of the sharpness is not float.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
     Examples:
         >>> import mindspore
         >>> import mindspore.nn as nn

@@ -34,9 +34,6 @@ class Bernoulli(Distribution):
         dtype (mindspore.dtype): The type of the event samples. Default: mstype.int32.
         name (str): The name of the distribution. Default: 'Bernoulli'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
     Note:
         `probs` must be a proper probability (0 < p < 1).
         `dist_spec_args` is `probs`.

@@ -44,6 +41,9 @@ class Bernoulli(Distribution):
     Raises:
         ValueError: When p <= 0 or p >=1.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
     Examples:
         >>> import mindspore
         >>> import mindspore.nn as nn

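A usage sketch for `Bernoulli` (illustrative; `probs` must lie strictly between 0 and 1, as the Note and Raises sections state):

>>> import mindspore
>>> import mindspore.nn.probability.distribution as msd
>>> b = msd.Bernoulli(probs=0.5, dtype=mindspore.int32)
>>> mean = b.mean()
>>> samples = b.sample((2, 3))  # int32 samples of shape (2, 3)
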
@@ -43,9 +43,6 @@ class Beta(Distribution):
         dtype (mindspore.dtype): The type of the event samples. Default: mstype.float32.
         name (str): The name of the distribution. Default: 'Beta'.

-    Supported Platforms:
-        ``Ascend``
-
     Note:
         `concentration1` and `concentration0` must be greater than zero.
         `dist_spec_args` are `concentration1` and `concentration0`.

@@ -55,6 +52,9 @@ class Beta(Distribution):
         ValueError: When concentration1 <= 0 or concentration0 >=1.
         TypeError: When the input `dtype` is not a subclass of float.

+    Supported Platforms:
+        ``Ascend``
+
     Examples:
         >>> import mindspore
         >>> import mindspore.nn as nn

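A construction sketch for `Beta` (illustrative; documented above as Ascend-only, with both concentrations required to be positive):

>>> import mindspore
>>> import mindspore.nn.probability.distribution as msd
>>> b = msd.Beta([3.0], [4.0], dtype=mindspore.float32)
>>> mean = b.mean()  # concentration1 / (concentration1 + concentration0)
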
@@ -207,7 +207,7 @@ class ConvertModelUtils:
                 will be overwritten. Default: False.

         Returns:
-            model (Object): High-Level API for Training.
+            model (Object), High-Level API for Training.

         Supported Platforms:
             ``Ascend`` ``GPU``