Modify URLs to adapt to the docs repo file structure

lvmingfu 2022-04-12 18:25:42 +08:00
parent e26c0cee3b
commit 725eef4ceb
8 changed files with 12 additions and 12 deletions

View File

@@ -99,7 +99,7 @@ MindSpore context is used to configure the current execution environment, including the execution mode, execution
Memory reuse:
- **mem_Reuse** indicates whether the memory reuse feature is enabled. When set to True, memory reuse is enabled; when set to False, it is disabled.
-For details about the running data recorder and memory reuse configuration, see `Configuring RDR and Memory Reuse <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/custom_debugging_info.html>`_
+For details about the running data recorder and memory reuse configuration, see `Configuring RDR and Memory Reuse <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/custom_debug.html>`_
- **precompile_only** (bool) - Indicates whether to only precompile the network. Default: False. When set to True, the network is compiled but not executed.
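
A minimal sketch of configuring the options described above through the context interface; the config file path and the JSON keys for RDR/memory reuse follow the linked tutorial and are assumptions here, not part of this commit.

# Sketch only: set precompile_only and point MindSpore at an env-config JSON
# that carries the RDR and memory-reuse settings.
from mindspore import context

context.set_context(mode=context.GRAPH_MODE,
                    precompile_only=False,
                    # Assumed file layout, per the linked tutorial:
                    # {"rdr": {"enable": true, "path": "/tmp/rdr"},
                    #  "sys": {"mem_reuse": true}}
                    env_config_path="./mindspore_config.json")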

View File

@@ -3,7 +3,7 @@ mindspore.build_searched_strategy
.. py:class:: mindspore.build_searched_strategy(strategy_filename)
-Builds the strategy of every parameter in the network, used for distributed inference. For usage details, refer to: `Saving and Loading a Model in Hybrid Parallel Mode <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/save_load_model_hybrid_parallel.html>`_
+Builds the strategy of every parameter in the network, used for distributed inference. For usage details, refer to: `Saving and Loading a Model in Hybrid Parallel Mode <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/save_load.html>`_
**Parameters:**

View File

@@ -3,7 +3,7 @@ mindspore.merge_sliced_parameter
.. py:method:: mindspore.merge_sliced_parameter(sliced_parameters, strategy=None)
-Merges parameter slices into one complete parameter, used for distributed inference. For details, refer to: `Saving and Loading a Model in Hybrid Parallel Mode <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/save_load_model_hybrid_parallel.html>`_
+Merges parameter slices into one complete parameter, used for distributed inference. For details, refer to: `Saving and Loading a Model in Hybrid Parallel Mode <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/save_load.html>`_
**Parameters:**

View File

@@ -5,7 +5,7 @@
Callback functions can perform some operations before or after each step or epoch.
To create a custom Callback, inherit from the Callback base class and override the corresponding methods. For details about custom Callbacks, see
-`Callback <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/custom_debugging_info.html>`_
+`Callback <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/custom_debug.html>`_
.. py:method:: begin(run_context)
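
A minimal sketch of the subclassing pattern described above; the callback name and the printed fields are illustrative assumptions, not part of this commit.

# Sketch only: a custom Callback that prints the loss after every epoch.
from mindspore.train.callback import Callback

class LossPrinter(Callback):
    """Print the current epoch number and loss at the end of each epoch."""
    def epoch_end(self, run_context):
        cb_params = run_context.original_args()
        # net_outputs is assumed to be the scalar loss returned by the network.
        print("epoch:", cb_params.cur_epoch_num, "loss:", cb_params.net_outputs)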

View File

@@ -122,7 +122,7 @@ std::string GetOpDebugLevel() {
if (!TbeUtils::IsOneOf(value_ranges, std::stoul(env_level.c_str()))) {
MS_LOG(WARNING) << "Invalid environment variable '" << kCOMPILER_OP_LEVEL << "': " << env_level
<< ", the value should be in [0, 1, 2, 3, 4], now using the default value 3."
-"Get more detail info at https://www.mindspore.cn/docs/note/zh-CN/master/env_var_list.html";
+"Get more detail info at https://www.mindspore.cn/docs/zh-CN/master/note/env_var_list.html";
} else {
op_debug_level = env_level;
}
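
A hedged sketch of driving this check from the user side. The value range and default come from the warning above; the exact environment variable name is an assumption based on the env_var_list page linked in the message.

# Sketch only: the op debug level must be exported before MindSpore starts.
# MS_COMPILER_OP_LEVEL is an assumed variable name; valid values are 0-4,
# default 3, per the warning above.
import os

os.environ["MS_COMPILER_OP_LEVEL"] = "3"

import mindspore  # imported after the variable is set so the backend sees it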

View File

@@ -664,7 +664,7 @@ class NeighborExchange(Primitive):
This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are
in the same subnet, please check the `details \
-<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_training_ops.html#id2>`_.
+<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#id2>`_.
Args:
send_rank_ids (list(int)): Ranks which the data is sent to.
@@ -740,7 +740,7 @@ class AlltoAll(PrimitiveWithInfer):
This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are
in the same subnet, please check the `details \
-<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_training_ops.html#id2>`_.
+<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#id2>`_.
Args:
split_count (int): On each process, divide blocks into split_count number.
@@ -831,7 +831,7 @@ class NeighborExchangeV2(Primitive):
This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are
in the same subnet, please check the `details \
-<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_training_ops.html#id2>`_.
+<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#id2>`_.
Args:
send_rank_ids (list(int)): Ranks which the data is sent to. 8 rank_ids represents 8 directions, if one
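
A minimal sketch of one of these collectives (AlltoAll) under the full-mesh requirement above, assuming an 8-device distributed job has already been launched; the tensor shape and split/concat arguments are illustrative assumptions.

# Sketch only: run on each of 8 devices after the distributed job is started.
import numpy as np
import mindspore as ms
from mindspore import Tensor, nn, ops
from mindspore.communication import init

init()  # initialize the collective communication backend for this process

class Net(nn.Cell):
    def __init__(self):
        super().__init__()
        # Split the last dimension into 8 blocks and exchange one with each rank.
        self.alltoall = ops.AlltoAll(split_count=8, split_dim=-1, concat_dim=-1)

    def construct(self, x):
        return self.alltoall(x)

x = Tensor(np.ones((1, 1, 8, 8)), ms.float32)
out = Net()(x)  # same shape here because split_dim == concat_dim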

View File

@@ -82,7 +82,7 @@ class Callback:
Callback function can perform some operations before and after step or epoch.
To create a custom callback, subclass Callback and override the method associated
with the stage of interest. For details of Callback fusion, please check
-`Callback <https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debugging_info.html>`_.
+`Callback <https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debug.html>`_.
Examples:
>>> import numpy as np
@@ -249,7 +249,7 @@ class RunContext:
Callback objects can stop the loop by calling request_stop() of run_context.
This class needs to be used with :class:`mindspore.train.callback.Callback`.
For details of Callback fusion, please check
-`Callback <https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debugging_info.html>`_.
+`Callback <https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debug.html>`_.
Args:
original_args (dict): Holding the related information of model.
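
A minimal sketch of stopping the training loop through RunContext as described above; the threshold and the assumption that net_outputs is a scalar loss Tensor are illustrative, not part of this commit.

# Sketch only: a callback that asks the training loop to stop early.
from mindspore.train.callback import Callback

class EarlyStopOnLoss(Callback):
    """Call run_context.request_stop() once the loss drops below a threshold."""
    def __init__(self, threshold=0.05):
        super().__init__()
        self.threshold = threshold

    def step_end(self, run_context):
        cb_params = run_context.original_args()
        loss = cb_params.net_outputs  # assumed to be a scalar loss Tensor
        if loss.asnumpy() < self.threshold:
            run_context.request_stop()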

View File

@@ -1396,7 +1396,7 @@ def build_searched_strategy(strategy_filename):
"""
Build strategy of every parameter in network. Used in the case of distributed inference.
For details of it, please check:
-`<https://www.mindspore.cn/tutorials/experts/en/master/parallel/save_load_model_hybrid_parallel.html>`_.
+`<https://www.mindspore.cn/tutorials/experts/en/master/parallel/save_load.html>`_.
Args:
strategy_filename (str): Name of strategy file.
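
A minimal usage sketch of the function described above; the strategy file name is an illustrative assumption for a checkpoint produced during distributed training.

# Sketch only: load the per-parameter slicing strategy for distributed inference.
from mindspore import build_searched_strategy

strategy = build_searched_strategy("./strategy_train.ckpt")
print(list(strategy.keys()))  # parameter names mapped to their layout strategy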
@@ -1447,7 +1447,7 @@ def merge_sliced_parameter(sliced_parameters, strategy=None):
"""
Merge parameter slices into one parameter. Used in the case of distributed inference.
For details of it, please check:
-`<https://www.mindspore.cn/tutorials/experts/en/master/parallel/save_load_model_hybrid_parallel.html>`_.
+`<https://www.mindspore.cn/tutorials/experts/en/master/parallel/save_load.html>`_.
Args:
sliced_parameters (list[Parameter]): Parameter slices in order of rank id.
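
A minimal usage sketch of merging slices as described above; the slice values and the parameter name are illustrative assumptions.

# Sketch only: merge two rank slices of the same parameter back into one.
import numpy as np
from mindspore import Parameter, Tensor, merge_sliced_parameter

sliced_parameters = [
    Parameter(Tensor(np.array([0.0, 1.0, 2.0], dtype=np.float32)),
              "network.embedding_table"),
    Parameter(Tensor(np.array([3.0, 4.0, 5.0], dtype=np.float32)),
              "network.embedding_table"),
]
# With strategy=None the slices are concatenated in rank order.
merged = merge_sliced_parameter(sliced_parameters)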