forked from mindspore-Ecosystem/mindspore
modify urls to adapt to the docs repo file structure
parent e26c0cee3b
commit 725eef4ceb
@@ -99,7 +99,7 @@ MindSpore context, used to configure the current execution environment, including the execution mode, execution
 Memory reuse:
 
 - **mem_Reuse**: Indicates whether the memory reuse function is enabled. When set to True, memory reuse is enabled; when set to False, it is disabled.
-For details about the running data recorder (RDR) and memory reuse configuration, see `Configuring RDR and Memory Reuse <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/custom_debugging_info.html>`_.
+For details about the running data recorder (RDR) and memory reuse configuration, see `Configuring RDR and Memory Reuse <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/custom_debug.html>`_.
 
 
 - **precompile_only** (bool) - Indicates whether to only precompile the network. Default: False. When set to True, the network is only compiled, not executed.
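As a quick illustration of the precompile_only option documented above, a minimal sketch using the mindspore.context API this page describes:

>>> from mindspore import context
>>> # Compile the network without executing it, e.g. to surface
>>> # graph-compilation errors quickly before a full run.
>>> context.set_context(precompile_only=True)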
@@ -3,7 +3,7 @@ mindspore.build_searched_strategy
 
 .. py:class:: mindspore.build_searched_strategy(strategy_filename)
 
-Builds the strategy of every parameter in the network, used for distributed inference. For details of its usage, see: `Saving and Loading Models (HyBrid Parallel Mode) <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/save_load_model_hybrid_parallel.html>`_.
+Builds the strategy of every parameter in the network, used for distributed inference. For details of its usage, see: `Saving and Loading Models (HyBrid Parallel Mode) <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/save_load.html>`_.
 
 **Parameters:**
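A short usage sketch for the function documented above; the strategy file name here is hypothetical (such a file is produced during distributed training):

>>> from mindspore import build_searched_strategy
>>> # Maps each parameter name to its slicing strategy, for distributed inference.
>>> strategy = build_searched_strategy("./strategy_train.ckpt")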
@@ -3,7 +3,7 @@ mindspore.merge_sliced_parameter
 
 .. py:method:: mindspore.merge_sliced_parameter(sliced_parameters, strategy=None)
 
-Merges parameter slices into one complete parameter, used for distributed inference. For details, see: `Saving and Loading Models (HyBrid Parallel Mode) <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/save_load_model_hybrid_parallel.html>`_.
+Merges parameter slices into one complete parameter, used for distributed inference. For details, see: `Saving and Loading Models (HyBrid Parallel Mode) <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/save_load.html>`_.
 
 **Parameters:**
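A minimal sketch of merging slices, with made-up values; as the docstring elsewhere in this commit notes, the slices must be given in rank-id order:

>>> import numpy as np
>>> from mindspore import Tensor, Parameter, merge_sliced_parameter
>>> # Two hypothetical slices of one parameter, ordered by rank id.
>>> slices = [Parameter(Tensor(np.array([0.0, 1.0])), "net.weight"),
...           Parameter(Tensor(np.array([2.0, 3.0])), "net.weight")]
>>> merged = merge_sliced_parameter(slices)  # strategy=None concatenates along axis 0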
@@ -5,7 +5,7 @@
 
 A Callback function can perform some operations before a step or epoch starts or after it ends.
 To create a custom Callback, subclass the Callback base class and override the corresponding methods. For details about custom Callbacks, see
-`Callback <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/custom_debugging_info.html>`_.
+`Callback <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/custom_debug.html>`_.
 
 .. py:method:: begin(run_context)
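To ground the begin(run_context) hook shown above, a minimal custom Callback sketch (the class name is made up):

>>> from mindspore.train.callback import Callback
>>> class TrainingBanner(Callback):
...     def begin(self, run_context):
...         # Runs once, before the training loop starts.
...         print("training is starting")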
@@ -122,7 +122,7 @@ std::string GetOpDebugLevel() {
   if (!TbeUtils::IsOneOf(value_ranges, std::stoul(env_level.c_str()))) {
     MS_LOG(WARNING) << "Invalid environment variable '" << kCOMPILER_OP_LEVEL << "': " << env_level
                     << ", the value should be in [0, 1, 2, 3, 4], now using the default value 3."
-                       "Get more detail info at https://www.mindspore.cn/docs/note/zh-CN/master/env_var_list.html";
+                       "Get more detail info at https://www.mindspore.cn/docs/zh-CN/master/note/env_var_list.html";
   } else {
     op_debug_level = env_level;
   }
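A hedged illustration of the check above, assuming kCOMPILER_OP_LEVEL names the MS_COMPILER_OP_LEVEL environment variable from env_var_list:

>>> import os
>>> # Must be one of 0-4; invalid values fall back to the default level 3.
>>> os.environ["MS_COMPILER_OP_LEVEL"] = "2"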
@@ -664,7 +664,7 @@ class NeighborExchange(Primitive):
 
     This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are
     in the same subnet, please check the `details \
-    <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_training_ops.html#id2>`_.
+    <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#id2>`_.
 
     Args:
         send_rank_ids (list(int)): Ranks which the data is sent to.
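A hedged construction sketch; send_rank_ids comes from the Args above, while the remaining parameter names (recv_rank_ids, recv_shapes, send_shapes, recv_type) are assumptions about the operator's public signature:

>>> import mindspore as ms
>>> from mindspore import ops
>>> # Exchange a 3x3 float32 tensor with rank 1 in both directions.
>>> neighbor_exchange = ops.NeighborExchange(send_rank_ids=[1], recv_rank_ids=[1],
...                                          recv_shapes=([3, 3],), send_shapes=([3, 3],),
...                                          recv_type=ms.float32)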
@@ -740,7 +740,7 @@ class AlltoAll(PrimitiveWithInfer):
 
     This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are
     in the same subnet, please check the `details \
-    <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_training_ops.html#id2>`_.
+    <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#id2>`_.
 
     Args:
         split_count (int): On each process, divide blocks into split_count number.
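A hedged sketch; split_count is documented above, and split_dim/concat_dim are assumed from the operator's public signature:

>>> from mindspore import ops
>>> # Scatter 8 blocks along dim 0 and gather the received blocks along dim 0.
>>> all_to_all = ops.AlltoAll(split_count=8, split_dim=0, concat_dim=0)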
@@ -831,7 +831,7 @@ class NeighborExchangeV2(Primitive):
 
     This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are
     in the same subnet, please check the `details \
-    <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_training_ops.html#id2>`_.
+    <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#id2>`_.
 
     Args:
         send_rank_ids (list(int)): Ranks which the data is sent to. 8 rank_ids represents 8 directions, if one
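A hedged sketch; the 8-element send_rank_ids layout follows the Args above, while send_lens, recv_rank_ids, recv_lens, and data_format are assumptions about the public signature (-1 marks a direction with no exchange):

>>> from mindspore import ops
>>> # Exchange one row with the bottom neighbor (rank 1); -1 disables a direction.
>>> exchange = ops.NeighborExchangeV2(send_rank_ids=[-1, -1, -1, -1, 1, -1, -1, -1],
...                                   send_lens=[0, 1, 0, 0],
...                                   recv_rank_ids=[-1, -1, -1, -1, 1, -1, -1, -1],
...                                   recv_lens=[0, 1, 0, 0],
...                                   data_format="NCHW")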
@@ -82,7 +82,7 @@ class Callback:
     Callback function can perform some operations before and after step or epoch.
     To create a custom callback, subclass Callback and override the method associated
     with the stage of interest. For details of Callback fusion, please check
-    `Callback <https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debugging_info.html>`_.
+    `Callback <https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debug.html>`_.
 
     Examples:
         >>> import numpy as np
@@ -249,7 +249,7 @@ class RunContext:
     Callback objects can stop the loop by calling request_stop() of run_context.
     This class needs to be used with :class:`mindspore.train.callback.Callback`.
     For details of Callback fusion, please check
-    `Callback <https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debugging_info.html>`_.
+    `Callback <https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debug.html>`_.
 
     Args:
         original_args (dict): Holding the related information of model.
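To ground request_stop() and original_args() from the docstring above, a sketch of an early-stopping callback (the step budget is made up):

>>> from mindspore.train.callback import Callback
>>> class StopAfter(Callback):
...     def step_end(self, run_context):
...         cb_params = run_context.original_args()
...         # Ask the training loop to stop once a hypothetical step budget is reached.
...         if cb_params.cur_step_num >= 100:
...             run_context.request_stop()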
@@ -1396,7 +1396,7 @@ def build_searched_strategy(strategy_filename):
     """
     Build strategy of every parameter in network. Used in the case of distributed inference.
     For details of it, please check:
-    `<https://www.mindspore.cn/tutorials/experts/en/master/parallel/save_load_model_hybrid_parallel.html>`_.
+    `<https://www.mindspore.cn/tutorials/experts/en/master/parallel/save_load.html>`_.
 
     Args:
         strategy_filename (str): Name of strategy file.
@@ -1447,7 +1447,7 @@ def merge_sliced_parameter(sliced_parameters, strategy=None):
     """
     Merge parameter slices into one parameter. Used in the case of distributed inference.
     For details of it, please check:
-    `<https://www.mindspore.cn/tutorials/experts/en/master/parallel/save_load_model_hybrid_parallel.html>`_.
+    `<https://www.mindspore.cn/tutorials/experts/en/master/parallel/save_load.html>`_.
 
     Args:
         sliced_parameters (list[Parameter]): Parameter slices in order of rank id.