modify code display way

This commit is contained in:
huodagu 2023-01-04 16:45:22 +08:00
parent 6c9fd1bd63
commit 005dc05cf6
6 changed files with 15 additions and 7 deletions


@@ -6,6 +6,10 @@ For a complete ReWrite example, refer to
`rewrite_example.py <https://gitee.com/mindspore/mindspore/tree/master/docs/api/api_python/rewrite_example.py>`_
The main functions of the sample code include: how to create a SymbolTree from a network, and how to insert, delete, and replace nodes in the SymbolTree. It also covers modifying sub-networks and replacing nodes through pattern matching.
+ .. literalinclude:: rewrite_example.py
+     :language: python
+     :start-at: import
.. py:class:: mindspore.rewrite.SymbolTree(handler: SymbolTreeImpl)
SymbolTree usually corresponds to the forward computation process of a network.
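The `:start-at: import` option in the `literalinclude` directive added by this commit tells Sphinx to include rewrite_example.py starting from the first line that contains `import` (that line itself included). A minimal Python sketch of this selection logic (`include_start_at` is a hypothetical helper for illustration, not a Sphinx API):

```python
def include_start_at(text: str, marker: str) -> str:
    """Return the tail of `text` starting at the first line containing `marker`.

    Sketch of Sphinx's literalinclude ``:start-at:`` behavior: the matching
    line is kept, everything before it is dropped.
    """
    lines = text.splitlines(keepends=True)
    for i, line in enumerate(lines):
        if marker in line:
            return "".join(lines[i:])
    return ""  # marker not found; real Sphinx would emit a warning instead


sample = "# rewrite_example.py\n# setup notes\nimport mindspore\nnet = ...\n"
print(include_start_at(sample, "import"))  # drops the two comment lines
```

This is why the rendered API page now shows the example file without its leading license/comment header.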
@@ -483,7 +487,7 @@ For a complete ReWrite example, refer to
- **RuntimeError** - If the argument `node` is not of type NodeType.Tree.
- **TypeError** - If the argument `node` is not an instance of Node.
- .. py:method:: mindspore.rewrite.sparsify(f, arg_types, sparse_rules=None)
+ .. py:function:: mindspore.rewrite.sparsify(f, arg_types, sparse_rules=None)
Automatic model sparsification interface, which converts a dense model into a sparse model. Based on the argument types specified by `arg_types`, sparse arguments are propagated through the model and the corresponding sparse functions are called.


@@ -47,7 +47,7 @@ mindspore.nn.thor
- **decay_filter** (function) - A function that determines which layers weight decay is applied to; only valid when weight_decay > 0. Default: lambda x: x.name not in [].
- **split_indices** (list) - Sets the allreduce fusion strategy by A/G layer indices (see the formula above for the meaning of A/G). Only valid in distributed computation. Taking ResNet50 as an example, A/G each have 54 layers; when split_indices is set to [26,53], A/G are split into two allreduce groups, one covering layers 0~26 and the other layers 27~53. Default: None.
- **enable_clip_grad** (bool) - Whether to clip the gradients. Default: False.
- - **frequency** (int) - The update interval of A/G and $A^{-1}/G^{-1}$. A/G and $A^{-1}/G^{-1}$ will be updated once every frequency steps. Must be greater than 1. Default: 100.
+ - **frequency** (int) - The update interval of A/G and :math:`A^{-1}/G^{-1}`. When frequency equals N (N must be greater than 1), A/G and :math:`A^{-1}/G^{-1}` will be updated once every frequency steps, and other steps will use the stale A/G and :math:`A^{-1}/G^{-1}` to update weights. Default: 100.
Inputs:
- **gradients** (tuple[Tensor]) - The gradients of the training parameters, with the same shape as the training parameters.
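The `frequency` parameter described above amounts to a stale-update schedule: the A/G factors and their inverses are recomputed every N steps and reused in between. A minimal sketch of that schedule (`factor_schedule` is a hypothetical illustration, not part of the THOR API):

```python
def factor_schedule(num_steps: int, frequency: int) -> list:
    """For each step, record the step at which the factors in use were computed."""
    if frequency <= 1:
        raise ValueError("frequency must be greater than 1")
    last_update = None
    schedule = []
    for step in range(num_steps):
        if step % frequency == 0:
            last_update = step  # recompute A/G and their inverses here
        schedule.append(last_update)  # other steps reuse the stale factors
    return schedule


print(factor_schedule(7, 3))  # → [0, 0, 0, 3, 3, 3, 6]
```

A larger `frequency` trades factor freshness for less second-order computation per step.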


@@ -5,5 +5,9 @@ For a complete ReWrite example, refer to
`rewrite_example.py <https://gitee.com/mindspore/mindspore/tree/master/docs/api/api_python_en/rewrite_example.py>`_
The main functions of the sample code include: how to create a SymbolTree through the network, and how to insert, delete, and replace the nodes in the SymbolTree. It also includes the modification of the subnet and node replacement through pattern matching.
+ .. literalinclude:: rewrite_example.py
+     :language: python
+     :start-at: import
.. automodule:: mindspore.rewrite
    :members:


@@ -340,7 +340,7 @@ class DATASET_API DataHelper {
}
/// \brief Write pointer to bin, use pointer to avoid memcpy
- /// \note The value of `length`` must be equal to the length of `data`
+ /// \note The value of `length` must be equal to the length of `data`
/// \param[in] in_file File name to write to
/// \param[in] data Pointer to data
/// \param[in] length Length of values to write from pointer
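The note fixed above states the invariant that `length` must equal the actual size of `data`. A small Python sketch of enforcing that invariant before writing (`write_bin` is a hypothetical helper for illustration, not the DataHelper C++ API):

```python
import os
import tempfile


def write_bin(path: str, data: bytes, length: int) -> None:
    # Mirror the doc note: `length` must be equal to the length of `data`,
    # otherwise the caller would write too few or too many bytes.
    if length != len(data):
        raise ValueError("length must be equal to the length of data")
    with open(path, "wb") as f:
        f.write(data)


path = os.path.join(tempfile.gettempdir(), "sample.bin")
write_bin(path, b"\x01\x02\x03", 3)  # ok: length matches len(data)
```

Passing a mismatched `length` raises instead of silently truncating or over-reading the buffer.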


@@ -24,7 +24,7 @@
namespace mindspore {
namespace ops {
constexpr auto kNameExpandDims = "ExpandDims";
- /// \brief Adds an additional dimension to input_x` at the given axis.
+ /// \brief Adds an additional dimension to `input_x` at the given axis.
/// Refer to Python API @ref mindspore.ops.ExpandDims for more details.
class MIND_API ExpandDims : public BaseOperator {
 public:
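ExpandDims inserts a length-1 dimension into the input's shape at the given axis. A minimal Python sketch of the resulting shape (`expanded_shape` is a hypothetical helper; the real operator works on tensors, not shape tuples):

```python
def expanded_shape(shape: tuple, axis: int) -> tuple:
    """Insert a length-1 dimension at `axis`.

    Negative axes count from the end of the *output* shape, so the valid
    range for an input of rank ndim is [-ndim - 1, ndim].
    """
    ndim = len(shape)
    if not -ndim - 1 <= axis <= ndim:
        raise ValueError("axis out of range")
    if axis < 0:
        axis += ndim + 1
    return shape[:axis] + (1,) + shape[axis:]


print(expanded_shape((2, 3), 0))   # → (1, 2, 3)
print(expanded_shape((2, 3), -1))  # → (2, 3, 1)
```

The element count is unchanged; only the rank grows by one, which is why the operation never copies data.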


@@ -314,9 +314,9 @@ def thor(net, learning_rate, damping, momentum, weight_decay=0.0, loss_scale=1.0
enable_clip_grad (bool): Whether to clip the gradients. Default: False
- frequency(int): The update interval of A/G and $A^{-1}/G^{-1}$. When frequency equals N (N is greater than 1),
-     A/G and $A^{-1}/G^{-1}$ will be updated every N steps, and other steps will use the stale A/G and
-     $A^{-1}/G^{-1}$ to update weights. Default: 100.
+ frequency(int): The update interval of A/G and :math:`A^{-1}/G^{-1}`. When frequency equals N
+     (N is greater than 1), A/G and :math:`A^{-1}/G^{-1}` will be updated every N steps,
+     and other steps will use the stale A/G and :math:`A^{-1}/G^{-1}` to update weights. Default: 100.
Inputs:
- **gradients** (tuple[Tensor]) - The gradients of `params`, the shape is the same as `params`.