diff --git a/docs/api/api_python/mindspore.nn/mindspore.nn.ForwardValueAndGrad.txt b/docs/api/api_python/mindspore.nn/mindspore.nn.ForwardValueAndGrad.txt
index ee17eb5c2a7..e026786402f 100644
--- a/docs/api/api_python/mindspore.nn/mindspore.nn.ForwardValueAndGrad.txt
+++ b/docs/api/api_python/mindspore.nn/mindspore.nn.ForwardValueAndGrad.txt
@@ -1,61 +1,59 @@
-Class mindspore.nn.ForwardValueAndGrad(network, weights=None, get_all=False, get_by_list=False, sens_param=False)
+mindspore.nn.ForwardValueAndGrad
+===================================
+
+.. py:class:: mindspore.nn.ForwardValueAndGrad(network, weights=None, get_all=False, get_by_list=False, sens_param=False)
 
     Network training package class.
 
     It includes the forward network and a gradient function. The Cell generated by this class is trained with the '\*inputs' inputs.
     The backward graph is created via the gradient function to compute the gradients.
 
-    Parameters:
-        network (Cell): The training network.
-        weights (ParameterTuple): The parameters of the training network for which gradients are computed.
-        get_all (bool): If True, compute the gradients with respect to the network inputs. Default: False.
-        get_by_list (bool): If True, compute the gradients with respect to the parameter variables.
-        If get_all and get_by_list are both False, compute the gradient with respect to the first input.
-        If get_all and get_by_list are both True, get the gradients of the inputs and the parameter variables at the same time, in the form of ((gradients of the inputs), (gradients of the parameters)).
-
-        Default: False.
-        sens_param (bool): Whether to take sens as an input.
-        If sens_param is False, sens defaults to 'ones_like(outputs)'.
-        Default: False.
-        If sens_param is True, the value of sens must be specified.
+    **Parameters:**
+
+    - **network** (Cell) - The training network.
+    - **weights** (ParameterTuple) - The parameters of the training network for which gradients are computed.
+    - **get_all** (bool) - If True, compute the gradients with respect to the network inputs. Default: False.
+    - **get_by_list** (bool) - If True, compute the gradients with respect to the parameter variables. If `get_all` and `get_by_list` are both False, compute the gradient with respect to the first input. If `get_all` and `get_by_list` are both True, get the gradients of the inputs and the parameter variables at the same time, in the form of ((gradients of the inputs), (gradients of the parameters)). Default: False.
+    - **sens_param** (bool) - Whether to take sens as an input. If `sens_param` is False, sens defaults to 'ones_like(outputs)'. Default: False. If `sens_param` is True, the value of sens must be specified.
 
+    **Inputs:**
+
+    - **(\*inputs)** (Tuple(Tensor...)) - The input tuple with shape :math:`(N, \ldots)`.
+    - **(sens)** - The scaling value of the gradients for backpropagation. If the network has a single output, sens is a tensor. If the network has multiple outputs, sens is a tuple(tensor).
 
-    Inputs:
-        - **(\*inputs)** (Tuple(Tensor...)): The input tuple with shape :math:`(N, \ldots)`.
-        - **(sens)**: The scaling value of the gradients for backpropagation.
-        If the network has a single output, sens is a tensor.
-        If the network has multiple outputs, sens is a tuple(tensor).
+    **Outputs:**
 
-    Outputs:
-        - **forward value**: The forward result of the network.
-        - **gradients** (tuple(tensor)): The gradients from network backpropagation.
+    - **forward value** - The forward result of the network.
+    - **gradients** (tuple(tensor)) - The gradients from network backpropagation.
 
-    Supported Platforms:
-        ``Ascend`` ``GPU`` ``CPU``
+    **Supported Platforms:**
 
-    Examples:
-        >>> class Net(nn.Cell):
-        ...     def __init__(self):
-        ...         super(Net, self).__init__()
-        ...         self.weight = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="weight")
-        ...         self.matmul = P.MatMul()
-        ...
-        ...     def construct(self, x):
-        ...         out = self.matmul(x, self.weight)
-        ...         return out
-        ...
-        >>> net = Net()
-        >>> criterion = nn.SoftmaxCrossEntropyWithLogits()
-        >>> net_with_criterion = nn.WithLossCell(net, criterion)
-        >>> weight = ParameterTuple(net.trainable_params())
-        >>> train_network = nn.ForwardValueAndGrad(net_with_criterion, weights=weight, get_all=True, get_by_list=True)
-        >>> inputs = Tensor(np.ones([1, 2]).astype(np.float32))
-        >>> labels = Tensor(np.zeros([1, 2]).astype(np.float32))
-        >>> result = train_network(inputs, labels)
-        >>> print(result)
-        (Tensor(shape=[1], dtype=Float32, value=[0.00000000e+00]), ((Tensor(shape=[1, 2], dtype=Float32, value=
-        [[1.00000000e+00, 1.00000000e+00]]), Tensor(shape=[1, 2], dtype=Float32, value=
-        [[0.00000000e+00, 0.00000000e+00]])), (Tensor(shape=[2, 2], dtype=Float32, value=
-        [[5.00000000e-01, 5.00000000e-01],
-        [5.00000000e-01, 5.00000000e-01]]),)))
+    ``Ascend`` ``GPU`` ``CPU``
+
+    **Examples:**
+
+    >>> class Net(nn.Cell):
+    ...     def __init__(self):
+    ...         super(Net, self).__init__()
+    ...         self.weight = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="weight")
+    ...         self.matmul = P.MatMul()
+    ...
+    ...     def construct(self, x):
+    ...         out = self.matmul(x, self.weight)
+    ...         return out
+    ...
+    >>> net = Net()
+    >>> criterion = nn.SoftmaxCrossEntropyWithLogits()
+    >>> net_with_criterion = nn.WithLossCell(net, criterion)
+    >>> weight = ParameterTuple(net.trainable_params())
+    >>> train_network = nn.ForwardValueAndGrad(net_with_criterion, weights=weight, get_all=True, get_by_list=True)
+    >>> inputs = Tensor(np.ones([1, 2]).astype(np.float32))
+    >>> labels = Tensor(np.zeros([1, 2]).astype(np.float32))
+    >>> result = train_network(inputs, labels)
+    >>> print(result)
+    (Tensor(shape=[1], dtype=Float32, value=[0.00000000e+00]), ((Tensor(shape=[1, 2], dtype=Float32, value=
+    [[1.00000000e+00, 1.00000000e+00]]), Tensor(shape=[1, 2], dtype=Float32, value=
+    [[0.00000000e+00, 0.00000000e+00]])), (Tensor(shape=[2, 2], dtype=Float32, value=
+    [[5.00000000e-01, 5.00000000e-01],
+    [5.00000000e-01, 5.00000000e-01]]),)))
\ No newline at end of file
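For reference, a minimal sketch of the `sens_param` path described above (not part of the patch). It reuses `net_with_criterion`, `weight`, `inputs`, and `labels` from the example; the sens value itself is an illustrative assumption and must match the shape of the network output:

    >>> # sens replaces the default 'ones_like(outputs)' scaling of the output gradient.
    >>> train_network = nn.ForwardValueAndGrad(net_with_criterion, weights=weight,
    ...                                        get_by_list=True, sens_param=True)
    >>> sens = Tensor(np.ones([1]).astype(np.float32))  # same shape as the loss output
    >>> loss, grads = train_network(inputs, labels, sens)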
diff --git a/docs/api/api_python/mindspore.train/mindspore.train.SummaryCollector.rst b/docs/api/api_python/mindspore.train/mindspore.train.SummaryCollector.rst
index 6d6178f94bc..fa1f4ba3435 100644
--- a/docs/api/api_python/mindspore.train/mindspore.train.SummaryCollector.rst
+++ b/docs/api/api_python/mindspore.train/mindspore.train.SummaryCollector.rst
@@ -1,3 +1,6 @@
+mindspore.train.callback.SummaryCollector
+==========================================
+
 .. py:class:: mindspore.train.callback.SummaryCollector(summary_dir, collect_freq=10, collect_specified_data=None, keep_default_action=True, custom_lineage_data=None, collect_tensor_freq=None, max_file_size=None, export_options=None)
 
     SummaryCollector can collect some commonly used information.
diff --git a/docs/api/api_python/mindspore.train/mindspore.train.callback.Callback.rst b/docs/api/api_python/mindspore.train/mindspore.train.callback.Callback.rst
index 13dca1b62a8..9f26f0d742e 100644
--- a/docs/api/api_python/mindspore.train/mindspore.train.callback.Callback.rst
+++ b/docs/api/api_python/mindspore.train/mindspore.train.callback.Callback.rst
@@ -1,3 +1,6 @@
+mindspore.train.callback.Callback
+===================================
+
 .. py:class:: mindspore.train.callback.Callback
 
     Base class for building callback functions. A callback function is a context manager, and is invoked when the model is run.
diff --git a/docs/api/api_python/mindspore.train/mindspore.train.callback.CheckpointConfig.rst b/docs/api/api_python/mindspore.train/mindspore.train.callback.CheckpointConfig.rst
index f69347fc48d..a959c9481db 100644
--- a/docs/api/api_python/mindspore.train/mindspore.train.callback.CheckpointConfig.rst
+++ b/docs/api/api_python/mindspore.train/mindspore.train.callback.CheckpointConfig.rst
@@ -1,3 +1,6 @@
+mindspore.train.callback.CheckpointConfig
+==========================================
+
 .. py:class:: mindspore.train.callback.CheckpointConfig(save_checkpoint_steps=1, save_checkpoint_seconds=0, keep_checkpoint_max=5, keep_checkpoint_per_n_minutes=0, integrated_save=True, async_save=False, saved_network=None, append_info=None, enc_key=None, enc_mode='AES-GCM')
 
     The configuration policy for saving checkpoints.
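A brief usage sketch tying the configuration above to its callback (illustrative, not part of the patch; the prefix and directory are placeholders):

    >>> from mindspore.train.callback import ModelCheckpoint, CheckpointConfig
    >>> # Save every 100 steps, keep at most 5 checkpoint files.
    >>> config = CheckpointConfig(save_checkpoint_steps=100, keep_checkpoint_max=5)
    >>> ckpt_cb = ModelCheckpoint(prefix='net', directory='./checkpoints', config=config)
    >>> # Pass the callback to training, e.g. model.train(epoch, dataset, callbacks=[ckpt_cb])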
diff --git a/docs/api/api_python/mindspore.train/mindspore.train.callback.LearningRateScheduler.rst b/docs/api/api_python/mindspore.train/mindspore.train.callback.LearningRateScheduler.rst
index b3296f8956d..c1652cc26a6 100644
--- a/docs/api/api_python/mindspore.train/mindspore.train.callback.LearningRateScheduler.rst
+++ b/docs/api/api_python/mindspore.train/mindspore.train.callback.LearningRateScheduler.rst
@@ -1,3 +1,6 @@
+mindspore.train.callback.LearningRateScheduler
+===============================================
+
 .. py:class:: mindspore.train.callback.LearningRateScheduler(learning_rate_function)
 
     Changes the learning rate during training.
diff --git a/docs/api/api_python/mindspore.train/mindspore.train.callback.LossMonitor.rst b/docs/api/api_python/mindspore.train/mindspore.train.callback.LossMonitor.rst
index e64bae7ce4f..7bdfd17f868 100644
--- a/docs/api/api_python/mindspore.train/mindspore.train.callback.LossMonitor.rst
+++ b/docs/api/api_python/mindspore.train/mindspore.train.callback.LossMonitor.rst
@@ -1,3 +1,6 @@
+mindspore.train.callback.LossMonitor
+=======================================
+
 .. py:class:: mindspore.train.callback.LossMonitor(per_print_times=1)
 
     Monitors the loss during training.
diff --git a/docs/api/api_python/mindspore.train/mindspore.train.callback.ModelCheckpoint.rst b/docs/api/api_python/mindspore.train/mindspore.train.callback.ModelCheckpoint.rst
index 8c5b78a3d5a..e7f62d769b1 100644
--- a/docs/api/api_python/mindspore.train/mindspore.train.callback.ModelCheckpoint.rst
+++ b/docs/api/api_python/mindspore.train/mindspore.train.callback.ModelCheckpoint.rst
@@ -1,3 +1,6 @@
+mindspore.train.callback.ModelCheckpoint
+==========================================
+
 .. py:class:: mindspore.train.callback.ModelCheckpoint(prefix='CKP', directory=None, config=None)
 
     The callback function for saving checkpoints.
diff --git a/docs/api/api_python/mindspore.train/mindspore.train.callback.RunContext.rst b/docs/api/api_python/mindspore.train/mindspore.train.callback.RunContext.rst
index 36014f86238..d4c01ad0c0d 100644
--- a/docs/api/api_python/mindspore.train/mindspore.train.callback.RunContext.rst
+++ b/docs/api/api_python/mindspore.train/mindspore.train.callback.RunContext.rst
@@ -1,3 +1,6 @@
+mindspore.train.callback.RunContext
+====================================
+
 .. py:class:: mindspore.train.callback.RunContext(original_args)
 
     Provides information about the model.
diff --git a/docs/api/api_python/mindspore.train/mindspore.train.callback.TimeMonitor.rst b/docs/api/api_python/mindspore.train/mindspore.train.callback.TimeMonitor.rst
index b9c0f521eb2..fd2e1e9d83a 100644
--- a/docs/api/api_python/mindspore.train/mindspore.train.callback.TimeMonitor.rst
+++ b/docs/api/api_python/mindspore.train/mindspore.train.callback.TimeMonitor.rst
@@ -1,3 +1,6 @@
+mindspore.train.callback.TimeMonitor
+=====================================
+
 .. py:class:: mindspore.train.callback.TimeMonitor(data_size=None)
 
     Monitors the training time.
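As an illustration of the callbacks above (a sketch, not part of the patch): `LearningRateScheduler` takes a function of the current learning rate and step number; the decay policy below is an assumption for demonstration:

    >>> from mindspore.train.callback import LearningRateScheduler, LossMonitor, TimeMonitor
    >>> def learning_rate_function(lr, cur_step_num):
    ...     # assumed policy: decay the learning rate by 10x every 1000 steps
    ...     if cur_step_num % 1000 == 0:
    ...         lr = lr * 0.1
    ...     return lr
    >>> callbacks = [LearningRateScheduler(learning_rate_function),
    ...              LossMonitor(per_print_times=1), TimeMonitor()]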
diff --git a/docs/api/api_python/mindspore.train/mindspore.train.summary.SummaryRecord.rst b/docs/api/api_python/mindspore.train/mindspore.train.summary.SummaryRecord.rst
index 52c7c06ab7d..c28de7dcbb2 100644
--- a/docs/api/api_python/mindspore.train/mindspore.train.summary.SummaryRecord.rst
+++ b/docs/api/api_python/mindspore.train/mindspore.train.summary.SummaryRecord.rst
@@ -1,3 +1,6 @@
+mindspore.train.summary.SummaryRecord
+=======================================
+
 .. py:class:: mindspore.train.summary.SummaryRecord(log_dir, file_prefix='events', file_suffix='_MS', network=None, max_file_size=None, raise_exception=False, export_options=None)
 
     SummaryRecord is used to record summary data and lineage data.
@@ -76,7 +79,6 @@
     ...     with SummaryRecord(log_dir="./summary_dir", file_prefix="xx_", file_suffix="_yy") as summary_record:
     ...         summary_record.add_value('scalar', 'loss', Tensor(0.1))
 
-
 .. py:method:: close()
 
     Persists all events and closes the SummaryRecord. Please use a with statement or a try...finally statement to close it automatically.
@@ -90,7 +92,6 @@
     ...     finally:
     ...         summary_record.close()
 
-
 .. py:method:: flush()
 
     Persists the event file to disk.
@@ -104,7 +105,6 @@
     ...     with SummaryRecord(log_dir="./summary_dir", file_prefix="xx_", file_suffix="_yy") as summary_record:
     ...         summary_record.flush()
 
-
 .. py:method:: log_dir
    :property:
@@ -121,7 +121,6 @@
     ...     with SummaryRecord(log_dir="./summary_dir", file_prefix="xx_", file_suffix="_yy") as summary_record:
     ...         log_dir = summary_record.log_dir
 
-
 .. py:method:: record(step, train_network=None, plugin_filter=None)
 
     Records the summary.
@@ -150,7 +149,6 @@
     ...
     True
 
-
 .. py:method:: set_mode(mode)
 
     Sets the training phase. Different training phases affect data recording.
diff --git a/docs/api/api_python/mindspore/mindspore.build_searched_strategy.rst b/docs/api/api_python/mindspore/mindspore.build_searched_strategy.rst
index 0b826d887d1..ef355bef2f6 100644
--- a/docs/api/api_python/mindspore/mindspore.build_searched_strategy.rst
+++ b/docs/api/api_python/mindspore/mindspore.build_searched_strategy.rst
@@ -1,7 +1,7 @@
 mindspore.build_searched_strategy
-==================================
+=======================================
 
-.. py:method:: mindspore.build_searched_strategy(strategy_filename)
+.. py:class:: mindspore.build_searched_strategy(strategy_filename)
 
     Builds the strategy of every parameter in the network, used for distributed inference. For details about its usage, see `Saving and Loading Models (HyBrid Parallel Mode) `_.
diff --git a/docs/api/api_python/mindspore/mindspore.dtype_to_nptype.rst b/docs/api/api_python/mindspore/mindspore.dtype_to_nptype.rst
index e6054e251be..2473331e728 100644
--- a/docs/api/api_python/mindspore/mindspore.dtype_to_nptype.rst
+++ b/docs/api/api_python/mindspore/mindspore.dtype_to_nptype.rst
@@ -7,8 +7,8 @@ mindspore.dtype_to_nptype
 
 **Parameters:**
 
-    **type_** (mindspore.dtype) – The dtype of MindSpore.
+   **type_** (mindspore.dtype) – The dtype of MindSpore.
 
 **Returns:**
 
-    The data type of NumPy.
\ No newline at end of file
+   The data type of NumPy.
\ No newline at end of file
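A one-line illustration of the dtype mapping above (not part of the patch):

    >>> import mindspore as ms
    >>> ms.dtype_to_nptype(ms.int32)
    <class 'numpy.int32'>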
diff --git a/docs/api/api_python/mindspore/mindspore.set_seed.rst b/docs/api/api_python/mindspore/mindspore.set_seed.rst
index 202e101b18b..123beb20843 100644
--- a/docs/api/api_python/mindspore/mindspore.set_seed.rst
+++ b/docs/api/api_python/mindspore/mindspore.set_seed.rst
@@ -6,10 +6,9 @@ mindspore.set_seed
     Sets the global seed.
 
     .. note::
-
-        The global seed can be used by numpy.random, mindspore.common.Initializer, mindspore.ops.composite.random_ops, and mindspore.nn.probability.distribution.
-        If the global seed is not set, these packages each use their own seed: numpy.random and mindspore.common.Initializer choose a random seed value, while mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution use zero as the seed value.
-        The seed set by numpy.random.seed() can only be used by numpy.random, whereas the seed set by this API can also be used by numpy.random, so it is recommended to use this API to set all seeds.
+        - The global seed can be used by numpy.random, mindspore.common.Initializer, mindspore.ops.composite.random_ops, and mindspore.nn.probability.distribution.
+        - If the global seed is not set, these packages each use their own seed: numpy.random and mindspore.common.Initializer choose a random seed value, while mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution use zero as the seed value.
+        - The seed set by numpy.random.seed() can only be used by numpy.random, whereas the seed set by this API can also be used by numpy.random, so it is recommended to use this API to set all seeds.
 
 **Parameters:**
 
@@ -20,33 +19,30 @@ mindspore.set_seed
 - **ValueError** – The seed value is invalid (less than 0).
 - **TypeError** – The seed value is not an integer.
 
-**Examples:**
 
-    .. code-block::
+    >>> import numpy as np
+    >>> import mindspore as ms
+    >>> import mindspore.ops as ops
+    >>> from mindspore import Tensor, set_seed, Parameter
+    >>> from mindspore.common.initializer import initializer
 
-        >>> import numpy as np
-        >>> import mindspore as ms
-        >>> import mindspore.ops as ops
-        >>> from mindspore import Tensor, set_seed, Parameter
-        >>> from mindspore.common.initializer import initializer
+    >>> # Note: (1) please make sure the code runs in PyNative (dynamic graph) mode;
+    >>> # (2) since composite-level operators require tensor-type arguments, as in the following example,
+    >>> # when the ops.uniform operator is used, minval and maxval are initialized as follows:
+    >>> minval = Tensor(1.0, ms.float32)
+    >>> maxval = Tensor(2.0, ms.float32)
 
-        >>> # Note: (1) please make sure the code runs in PyNative (dynamic graph) mode;
-        >>> # (2) since composite-level operators require tensor-type arguments, as in the following example,
-        >>> # when the ops.uniform operator is used, minval and maxval are initialized as follows:
-        >>> minval = Tensor(1.0, ms.float32)
-        >>> maxval = Tensor(2.0, ms.float32)
-
-        >>> # 1. If the global seed is not set, numpy.random and initializer choose a random seed:
-        >>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
-        >>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
-        >>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
-        >>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
-        >>> # Rerunning the program gives different results:
-        >>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A3
-        >>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A4
-        >>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W3
-        >>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W4
+    >>> # 1. If the global seed is not set, numpy.random and initializer choose a random seed:
+    >>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
+    >>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
+    >>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
+    >>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
+    >>> # Rerunning the program gives different results:
+    >>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A3
+    >>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A4
+    >>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W3
+    >>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W4
 
     >>> # 2. If the global seed is set, numpy.random and initializer use this seed:
     >>> set_seed(1234)
diff --git a/docs/api/api_python/nn/mindspore.nn.Adagrad.rst b/docs/api/api_python/nn/mindspore.nn.Adagrad.rst
index daec6b8a1fb..ba3a28d382a 100644
--- a/docs/api/api_python/nn/mindspore.nn.Adagrad.rst
+++ b/docs/api/api_python/nn/mindspore.nn.Adagrad.rst
@@ -6,7 +6,7 @@ mindspore.nn.Adagrad
     Implements the Adagrad algorithm using the ApplyAdagrad operator.
 
     Adagrad is used for online learning and stochastic optimization.
-    See the paper `Efficient Learning using Forward-Backward Splitting `_.
+    See the paper `Efficient Learning using Forward-Backward Splitting `_.
 
     The formula is as follows:
 
     .. math::
@@ -15,8 +15,8 @@ mindspore.nn.Adagrad
             w_{t+1} = w_{t} - lr*\frac{1}{\sqrt{h_{t+1}}}*g
         \end{array}
 
-    :math:`h` represents the cumulative sum of squared gradients,:math:`g` represents `grads`.
-    :math:`lr` represents `learning_rate`,:math:`w` represents `params`.
+    :math:`h` represents the cumulative sum of squared gradients, :math:`g` represents `grads`.
+    :math:`lr` represents `learning_rate`, :math:`w` represents `params`.
 
     .. note::
         When the parameters are not grouped, the `weight_decay` configured in the optimizer is applied to network parameters whose names do not contain "beta" or "gamma"; when the parameters are grouped, the weight decay policy can be adjusted per group. When grouping, each group of network parameters can be configured with `weight_decay`; if it is not configured, the `weight_decay` configured in the optimizer is used for that group.
 
@@ -34,14 +34,14 @@ mindspore.nn.Adagrad
 - **accum** (float) - The initial value of the accumulator :math:`h`. Must be greater than or equal to zero. Default: 0.1.
 - **learning_rate** (Union[float, Tensor, Iterable, LearningRateSchedule]) - Default: 0.001.
 
-   - **float** - A fixed learning rate. Must be greater than or equal to zero.
-   - **int** - A fixed learning rate. Must be greater than or equal to zero. Integers are converted to floats.
-   - **Tensor** - Can be a scalar or a 1-D vector. A scalar is a fixed learning rate; a 1-D vector is a dynamic learning rate, where the i-th value in the vector is taken as the learning rate at step i.
-   - **Iterable** - A dynamic learning rate. The i-th value of the iterable is taken as the learning rate at step i.
-   - **LearningRateSchedule** - A dynamic learning rate. During training, the optimizer calls the `LearningRateSchedule` instance with the step number as input to compute the current learning rate.
+  - **float** - A fixed learning rate. Must be greater than or equal to zero.
+  - **int** - A fixed learning rate. Must be greater than or equal to zero. Integers are converted to floats.
+  - **Tensor** - Can be a scalar or a 1-D vector. A scalar is a fixed learning rate; a 1-D vector is a dynamic learning rate, where the i-th value in the vector is taken as the learning rate at step i.
+  - **Iterable** - A dynamic learning rate. The i-th value of the iterable is taken as the learning rate at step i.
+  - **LearningRateSchedule** - A dynamic learning rate. During training, the optimizer calls the `LearningRateSchedule` instance with the step number as input to compute the current learning rate.
 
 - **update_slots** (bool) - If True, the accumulator :math:`h` is updated. Default: True.
-- **loss_scale** (float) - The gradient scaling coefficient. Must be greater than 0. If `loss_scale` is an integer, it is converted to a float. Usually the default value is used; this value needs to be the same as the `loss_scale` of `FixedLossScaleManager` only when a `FixedLossScaleManager` is used for training and its `drop_overflow_update` attribute is configured as False. For more details, see class`mindspore.FixedLossScaleManager`. Default: 1.0.
+- **loss_scale** (float) - The gradient scaling coefficient. Must be greater than 0. If `loss_scale` is an integer, it is converted to a float. Usually the default value is used; this value needs to be the same as the `loss_scale` of `FixedLossScaleManager` only when a `FixedLossScaleManager` is used for training and its `drop_overflow_update` attribute is configured as False. For more details, see :class:`mindspore.FixedLossScaleManager`. Default: 1.0.
 - **weight_decay** (Union[float, int]) - The weight decay value to be multiplied with the weights. Must be greater than or equal to 0.0. Default: 0.0.
 
 **Inputs:**
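To make the parameters above concrete, a minimal construction sketch (not part of the patch; `net` is an assumed, already-defined Cell):

    >>> from mindspore import nn
    >>> # accum seeds the accumulator h; learning_rate and weight_decay use the documented defaults.
    >>> optim = nn.Adagrad(net.trainable_params(), accum=0.1, learning_rate=0.001, weight_decay=0.0)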
diff --git a/docs/api/api_python/nn/mindspore.nn.Adam.rst b/docs/api/api_python/nn/mindspore.nn.Adam.rst
index 2c9ff581a4d..c20ce158ddd 100644
--- a/docs/api/api_python/nn/mindspore.nn.Adam.rst
+++ b/docs/api/api_python/nn/mindspore.nn.Adam.rst
@@ -5,7 +5,7 @@ mindspore.nn.Adam
 
     Updates the gradients by the Adaptive Moment Estimation (Adam) algorithm.
 
-    See the paper `Adam: A Method for Stochastic Optimization `_.
+    See the paper `Adam: A Method for Stochastic Optimization `_.
 
     The formula is as follows:
 
@@ -17,7 +17,7 @@ mindspore.nn.Adam
         w_{t+1} = w_{t} - l * \frac{m_{t+1}}{\sqrt{v_{t+1}} + \epsilon}
     \end{array}
 
-    :math:`m` represents the first moment `moment1`,:math:`v` represents the second moment `moment2`,:math:`g` represents `gradients`,:math:`l` represents the scaling factor,:math:`\beta_1, \beta_2` represent `beta1` and `beta2`,:math:`t` represents the update step,:math:`\beta_1^t` and :math:`\beta_2^t` represent `beta1_power` and `beta2_power`,:math:`\alpha` represents `learning_rate`,:math:`w` represents `params`,:math:`\epsilon` represents `eps`.
+    :math:`m` represents the first moment `moment1`, :math:`v` represents the second moment `moment2`, :math:`g` represents `gradients`, :math:`l` represents the scaling factor, :math:`\beta_1, \beta_2` represent `beta1` and `beta2`, :math:`t` represents the update step, :math:`\beta_1^t` and :math:`\beta_2^t` represent `beta1_power` and `beta2_power`, :math:`\alpha` represents `learning_rate`, :math:`w` represents `params`, and :math:`\epsilon` represents `eps`.
 
 .. note::
     If the forward network uses the SparseGatherV2 operator, the optimizer performs sparse computation; by setting `target` to CPU, the sparse computation can run on the host.
 
@@ -25,7 +25,6 @@ mindspore.nn.Adam
     When the parameters are not grouped, the `weight_decay` configured in the optimizer is applied to network parameters whose names do not contain "beta" or "gamma"; when the parameters are grouped, the weight decay policy can be adjusted per group. When grouping, each group of network parameters can be configured with `weight_decay`; if it is not configured, the `weight_decay` configured in the optimizer is used for that group.
 
-
 **Parameters:**
 
 - **params** (Union[list[Parameter], list[dict]]) - A list of `Parameter` or a list of dicts. When a list element is a dict, the keys of the dict can be "params", "lr", "weight_decay", "grad_centralization", and "order_params".
 
@@ -36,7 +35,7 @@ mindspore.nn.Adam
   - **grad_centralization** - Optional. If "grad_centralization" is in the keys, the configured value is used; it must be a bool. If not, `grad_centralization` is treated as False. This configuration only applies to convolution layers.
   - **order_params** - Optional. The corresponding value is the expected update order of the parameters. When parameter grouping is used, this is usually used to keep the order of `parameters` consistent for better performance. If "order_params" is in the keys, the other keys in that group configuration are ignored, and the parameters in "order_params" must be in one of the groups of `params`.
 
-- **learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]): Default: 1e-3.
+- **learning_rate** (Union[float, Tensor, Iterable, LearningRateSchedule]): Default: 1e-3.
 
   - **float** - A fixed learning rate. Must be greater than or equal to zero.
   - **int** - A fixed learning rate. Must be greater than or equal to zero. Integers are converted to floats.
  - **Tensor** - Can be a scalar or a 1-D vector. A scalar is a fixed learning rate; a 1-D vector is a dynamic learning rate, where the i-th value in the vector is taken as the learning rate at step i.
  - **Iterable** - A dynamic learning rate. The i-th value of the iterable is taken as the learning rate at step i.
  - **LearningRateSchedule** - A dynamic learning rate. During training, the optimizer calls the `LearningRateSchedule` instance with the step number as input to compute the current learning rate.
@@ -50,7 +49,7 @@ mindspore.nn.Adam
 - **use_locking** (bool) - Whether to protect parameter updates with a lock. If True, updates of the `w`, `m`, and `v` tensors are protected by a lock. If False, the result is unpredictable. Default: False.
 - **use_nesterov** (bool) - Whether to update the gradients using the Nesterov Accelerated Gradient (NAG) algorithm. If True, the gradients are updated with NAG. If False, the gradients are updated without NAG. Default: False.
 - **weight_decay** (float) - Weight decay (L2 penalty). Must be greater than or equal to 0. Default: 0.0.
-- **loss_scale** (float) - The gradient scaling coefficient. Must be greater than 0. If `loss_scale` is an integer, it is converted to a float. Usually the default value is used; this value needs to be the same as the `loss_scale` of `FixedLossScaleManager` only when a `FixedLossScaleManager` is used for training and its `drop_overflow_update` attribute is configured as False. For more details, see class`mindspore.FixedLossScaleManager`. Default: 1.0.
+- **loss_scale** (float) - The gradient scaling coefficient. Must be greater than 0. If `loss_scale` is an integer, it is converted to a float. Usually the default value is used; this value needs to be the same as the `loss_scale` of `FixedLossScaleManager` only when a `FixedLossScaleManager` is used for training and its `drop_overflow_update` attribute is configured as False. For more details, see :class:`mindspore.FixedLossScaleManager`. Default: 1.0.
 
 **Inputs:**
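To make the grouping keys above concrete, a sketch of grouped parameters (not part of the patch; assumes a `net` whose convolution parameters contain 'conv' in their names):

    >>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
    >>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
    >>> group_params = [{'params': conv_params, 'weight_decay': 0.01, 'grad_centralization': True},
    ...                 {'params': no_conv_params, 'lr': 0.01},
    ...                 {'order_params': net.trainable_params()}]
    >>> # conv parameters use weight_decay 0.01 and gradient centralization; the rest use lr 0.01;
    >>> # 'order_params' keeps the update order aligned with net.trainable_params().
    >>> optim = nn.Adam(group_params, learning_rate=0.1, weight_decay=0.0)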
diff --git a/docs/api/api_python/nn/mindspore.nn.Momentum.rst b/docs/api/api_python/nn/mindspore.nn.Momentum.rst
index 97abc5359de..b9e63f2b16a 100644
--- a/docs/api/api_python/nn/mindspore.nn.Momentum.rst
+++ b/docs/api/api_python/nn/mindspore.nn.Momentum.rst
@@ -20,22 +20,22 @@ mindspore.nn.Momentum
     .. math::
         p_{t+1} = p_{t} - lr \ast v_{t+1}
 
-    where :math:`grad`,:math:`lr`,:math:`p`,:math:`v`,:math:`u` respectively denote the gradients, learning_rate, params, moments (Moment), and momentum (Momentum).
+    where :math:`grad`, :math:`lr`, :math:`p`, :math:`v`, :math:`u` respectively denote the gradients, learning_rate, params, moments (Moment), and momentum (Momentum).
 
     .. note::
         When the parameters are not grouped, the `weight_decay` configured in the optimizer is applied to network parameters whose names do not contain "beta" or "gamma"; when the parameters are grouped, the weight decay policy can be adjusted per group. When grouping, each group of network parameters can be configured with `weight_decay`; if it is not configured, the `weight_decay` configured in the optimizer is used for that group.
 
     **Parameters:**
 
-    - **params (Union[list[Parameter], list[dict]]): A list of `Parameter` or a list of dicts. When a list element is a dict, the keys of the dict can be "params", "lr", "weight_decay", "grad_centralization", and "order_params".
+    - **params** (Union[list[Parameter], list[dict]]): A list of `Parameter` or a list of dicts. When a list element is a dict, the keys of the dict can be "params", "lr", "weight_decay", "grad_centralization", and "order_params".
 
-    -** params** - The weights of the current group. The value must be a list of `Parameter`.
-    -** lr** - Optional. If "lr" is in the keys, the configured value is used as the learning rate. If not, the `learning_rate` configured in the optimizer is used.
-    -** weight_decay** - Optional. If "weight_decay" is in the keys, the configured value is used as the weight decay value. If not, the `weight_decay` configured in the optimizer is used.
-    -** grad_centralization** - Optional. If "grad_centralization" is in the keys, the configured value is used; it must be a bool. If not, `grad_centralization` is treated as False. This configuration only applies to convolution layers.
-    -** order_params** - Optional. The corresponding value is the expected update order of the parameters. When parameter grouping is used, this is usually used to keep the order of `parameters` consistent for better performance. If "order_params" is in the keys, the other keys in that group configuration are ignored, and the parameters in "order_params" must be in one of the groups of `params`.
+    - **params** - The weights of the current group. The value must be a list of `Parameter`.
+    - **lr** - Optional. If "lr" is in the keys, the configured value is used as the learning rate. If not, the `learning_rate` configured in the optimizer is used.
+    - **weight_decay** - Optional. If "weight_decay" is in the keys, the configured value is used as the weight decay value. If not, the `weight_decay` configured in the optimizer is used.
+    - **grad_centralization** - Optional. If "grad_centralization" is in the keys, the configured value is used; it must be a bool. If not, `grad_centralization` is treated as False. This configuration only applies to convolution layers.
+    - **order_params** - Optional. The corresponding value is the expected update order of the parameters. When parameter grouping is used, this is usually used to keep the order of `parameters` consistent for better performance. If "order_params" is in the keys, the other keys in that group configuration are ignored, and the parameters in "order_params" must be in one of the groups of `params`.
 
-    - **learning_rate (Union[float, int, Tensor, Iterable, LearningRateSchedule]):
+    - **learning_rate** (Union[float, int, Tensor, Iterable, LearningRateSchedule]):
 
     - **float** - A fixed learning rate. Must be greater than or equal to zero.
     - **int** - A fixed learning rate. Must be greater than or equal to zero. Integers are converted to floats.
@@ -45,7 +45,7 @@ mindspore.nn.Momentum
 
 - **momentum** (float) - A float hyperparameter representing the momentum of the moving average. Must be greater than or equal to 0.0.
 - **weight_decay** (int, float) - Weight decay (L2 penalty). The value must be greater than or equal to 0.0. Default: 0.0.
-- **loss_scale** (float) - The gradient scaling coefficient. Must be greater than 0. If `loss_scale` is an integer, it is converted to a float. Usually the default value is used; this value needs to be the same as the `loss_scale` of `FixedLossScaleManager` only when a `FixedLossScaleManager` is used for training and its `drop_overflow_update` attribute is configured as False. For more details, see class`mindspore.FixedLossScaleManager`. Default: 1.0.
+- **loss_scale** (float) - The gradient scaling coefficient. Must be greater than 0. If `loss_scale` is an integer, it is converted to a float. Usually the default value is used; this value needs to be the same as the `loss_scale` of `FixedLossScaleManager` only when a `FixedLossScaleManager` is used for training and its `drop_overflow_update` attribute is configured as False. For more details, see :class:`mindspore.FixedLossScaleManager`. Default: 1.0.
 - **use_nesterov** (bool) - Whether to use the Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. Default: False.
 
 **Inputs:**
diff --git a/docs/api/api_python/nn/mindspore.nn.Optimizer.rst b/docs/api/api_python/nn/mindspore.nn.Optimizer.rst
index 5126acdb094..dd0f6b5fcd7 100644
--- a/docs/api/api_python/nn/mindspore.nn.Optimizer.rst
+++ b/docs/api/api_python/nn/mindspore.nn.Optimizer.rst
@@ -12,7 +12,7 @@ mindspore.nn.Optimizer
 
 **Parameters:**
 
-    - **learning_rate (Union[float, int, Tensor, Iterable, LearningRateSchedule]):
+    - **learning_rate** (Union[float, int, Tensor, Iterable, LearningRateSchedule]):
 
     - **float** - A fixed learning rate. Must be greater than or equal to zero.
     - **int** - A fixed learning rate. Must be greater than or equal to zero. Integers are converted to floats.
@@ -29,7 +29,7 @@ mindspore.nn.Optimizer
 
 - **order_params** - Optional. The corresponding value is the expected update order of the parameters. When parameter grouping is used, this is usually used to keep the order of `parameters` consistent for better performance. If "order_params" is in the keys, the other keys in that group configuration are ignored, and the parameters in "order_params" must be in one of the groups of `params`.
 - **weight_decay** (Union[float, int]) - An int or float weight decay value. Must be greater than or equal to 0. If `weight_decay` is an integer, it is converted to a float. Default: 0.0.
-- **loss_scale** (float) - The gradient scaling coefficient. Must be greater than 0. If `loss_scale` is an integer, it is converted to a float. Usually the default value is used; this value needs to be the same as the `loss_scale` of `FixedLossScaleManager ` only when a `FixedLossScaleManager` is used for training and its `drop_overflow_update` attribute is configured as False. For more details, see class`mindspore.FixedLossScaleManager`. Default: 1.0.
+- **loss_scale** (float) - The gradient scaling coefficient. Must be greater than 0. If `loss_scale` is an integer, it is converted to a float. Usually the default value is used; this value needs to be the same as the `loss_scale` of `FixedLossScaleManager` only when a `FixedLossScaleManager` is used for training and its `drop_overflow_update` attribute is configured as False. For more details, see :class:`mindspore.FixedLossScaleManager`. Default: 1.0.
 
 **Raises:**
diff --git a/docs/api/api_python/nn_probability/mindspore.nn.probability.bijector.Invert.rst b/docs/api/api_python/nn_probability/mindspore.nn.probability.bijector.Invert.rst
index 35822ca111c..97a53b26c88 100644
--- a/docs/api/api_python/nn_probability/mindspore.nn.probability.bijector.Invert.rst
+++ b/docs/api/api_python/nn_probability/mindspore.nn.probability.bijector.Invert.rst
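The `loss_scale` constraint repeated in the optimizer parameters above (the optimizer's `loss_scale` must match the `FixedLossScaleManager` value when `drop_overflow_update` is False) looks like this in use; a sketch assuming `net` and `loss` are already defined:

    >>> from mindspore import Model, FixedLossScaleManager, nn
    >>> loss_scale = 1024.0
    >>> manager = FixedLossScaleManager(loss_scale, drop_overflow_update=False)
    >>> # The same loss_scale is passed to both the optimizer and the manager.
    >>> optim = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9, loss_scale=loss_scale)
    >>> model = Model(net, loss_fn=loss, optimizer=optim, loss_scale_manager=manager)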
@@ -60,7 +60,7 @@ mindspore.nn.probability.bijector.Invert
 
     Forward transformation: transforms the input value into another distribution.
 
-        **Parameters:**
+    **Parameters:**
 
     **y** (Tensor) - The input.
 
diff --git a/docs/api/api_python/nn_probability/mindspore.nn.probability.distribution.Distribution.rst b/docs/api/api_python/nn_probability/mindspore.nn.probability.distribution.Distribution.rst
index d725a764a69..1f5385ba341 100644
--- a/docs/api/api_python/nn_probability/mindspore.nn.probability.distribution.Distribution.rst
+++ b/docs/api/api_python/nn_probability/mindspore.nn.probability.distribution.Distribution.rst
@@ -31,10 +31,11 @@ mindspore.nn.probability.distribution.Distribution
 
 .. py:method:: construct(name, *args, **kwargs)
 
-    Overrides the`construct`of Cell.
+    Overrides the `construct` of Cell.
 
     .. note::
-        Supported functions include: 'prob', 'log_prob', 'cdf', 'log_cdf', 'survival_function', 'log_survival', 'var', 'sd', 'mode', 'mean', 'entropy', 'kl_loss', 'cross_entropy', 'sample', 'get_dist_args', and 'get_dist_type'.
+        Supported functions include: 'prob', 'log_prob', 'cdf', 'log_cdf', 'survival_function', 'log_survival', 'var',
+        'sd', 'mode', 'mean', 'entropy', 'kl_loss', 'cross_entropy', 'sample', 'get_dist_args', and 'get_dist_type'.
 
 **Parameters:**
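To make the `construct` dispatch above concrete, a minimal sketch (not part of the patch; the Normal distribution and its parameters are assumptions for illustration):

    >>> import mindspore as ms
    >>> import mindspore.nn.probability.distribution as msd
    >>> from mindspore import Tensor
    >>> n = msd.Normal(0.0, 1.0, dtype=ms.float32)
    >>> value = Tensor([0.5], ms.float32)
    >>> p = n('prob', value)      # dispatches on the function name listed in the note above
    >>> p_direct = n.prob(value)  # equivalent direct call to the named method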