forked from mindspore-Ecosystem/mindspore
!42643 modify api format in documents
Merge pull request !42643 from lvmingfu/code_docs_master0922
commit b835df60ce
@ -548,7 +548,7 @@ mindspore.numpy能够充分利用MindSpore的强大功能,实现算子的自
- `ms_function`: wraps code into graph mode to improve execution efficiency.
- `GradOperation`: used for automatic differentiation.
- `mindspore.context`: used to set the execution mode, the backend device, and so on.
- `mindspore.set_context`: used to set the execution mode, the backend device, and so on.
- `mindspore.nn.Cell`: used to build deep learning models.

Usage examples are as follows:
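The example block that follows this hunk is not included in the diff context. As a rough stand-in, here is a minimal sketch of how these pieces are combined (an assumption-based sketch for MindSpore 1.8+, where `set_context` and `ms_function` are exposed at the package level; shapes and values are arbitrary):

    import mindspore as ms
    import mindspore.numpy as mnp
    from mindspore import ms_function

    ms.set_context(mode=ms.GRAPH_MODE)  # run the wrapped code as a compiled graph

    @ms_function  # compiles the function for better execution efficiency
    def forward(x, w):
        return mnp.dot(x, w) + 1.0

    x = mnp.ones((2, 3))
    w = mnp.ones((3, 4))
    print(forward(x, w))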
@ -630,7 +630,7 @@ mindspore.numpy能够充分利用MindSpore的强大功能,实现算子的自
...
Tensor(shape=[4], dtype=Float32, value= [ 2.00000000e+00, 2.00000000e+00, 2.00000000e+00, 2.00000000e+00]))

To take the derivative of a `forward` computation decorated with `ms_function`, the execution mode must first be set to graph mode through `context`, for example:
To take the derivative of a `forward` computation decorated with `ms_function`, the execution mode must first be set to graph mode through `set_context`, for example:

.. code-block:: python
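The `.. code-block:: python` body is truncated in this hunk. A minimal sketch of the pattern the changed sentence describes, i.e. differentiating an `ms_function`-decorated `forward` after switching to graph mode with `set_context` (the function and shapes are illustrative assumptions, not the docs' own example):

    import mindspore as ms
    import mindspore.numpy as mnp
    from mindspore import ms_function
    from mindspore.ops import GradOperation

    ms.set_context(mode=ms.GRAPH_MODE)  # graph mode must be set before taking the derivative

    @ms_function
    def forward(x):
        return mnp.sum(x ** 2)

    grad_all = GradOperation(get_all=True)   # gradients with respect to all inputs
    x = mnp.arange(4).astype('float32')
    print(grad_all(forward)(x))              # expected gradient: 2 * x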
@ -657,9 +657,9 @@ mindspore.numpy能够充分利用MindSpore的强大功能,实现算子的自
For more details, see `API GradOperation <https://www.mindspore.cn/docs/zh-CN/master/api_python/ops/mindspore.ops.GradOperation.html>`_ .

- Example of using mindspore.context
- Example of using mindspore.set_context

MindSpore supports computation on multiple backends, which can be configured through `mindspore.context`. Most operators in `mindspore.numpy` can run in graph mode or PyNative mode, and on a variety of backend devices such as CPU, GPU, or Ascend.
MindSpore supports computation on multiple backends, which can be configured through `mindspore.set_context`. Most operators in `mindspore.numpy` can run in graph mode or PyNative mode, and on a variety of backend devices such as CPU, GPU, or Ascend.

.. code-block:: python
@ -15,7 +15,7 @@
- **name** (str) - Name of the parameter. Default: None. If two or more `Parameter` objects with the same name exist in a network, you will be prompted to set a unique name when defining them.
- **requires_grad** (bool) - Whether the parameter requires gradients to be computed. Default: True.
- **layerwise_parallel** (bool) - In data/hybrid parallel mode, when `layerwise_parallel` is set to True, this parameter is filtered out during parameter broadcast and gradient aggregation. Default: False.
- **parallel_optimizer** (bool) - Indicates whether the parameter is sharded by the optimizer in `semi_auto_parallel` or `auto_parallel` parallel mode. It takes effect only when optimizer parallelism is enabled by setting `enable_parallel_optimizer` in the `mindspore.context.set_auto_parallel_context()` parallel configuration module. Default: True.
- **parallel_optimizer** (bool) - Indicates whether the parameter is sharded by the optimizer in `semi_auto_parallel` or `auto_parallel` parallel mode. It takes effect only when optimizer parallelism is enabled by setting `enable_parallel_optimizer` in the `mindspore.set_auto_parallel_context()` parallel configuration module. Default: True.

.. py:method:: cache_enable
    :property:
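For illustration, a hedged sketch of how the constructor arguments listed above are typically passed (the shape and name are made up):

    import numpy as np
    from mindspore import Parameter, Tensor

    # Trainable weight; with parallel_optimizer=True it may be sharded by the
    # optimizer when optimizer parallelism is enabled in the parallel context.
    weight = Parameter(
        Tensor(np.ones((3, 4), np.float32)),
        name="fc.weight",
        requires_grad=True,
        layerwise_parallel=False,
        parallel_optimizer=True,
    )
    print(weight.name, weight.requires_grad)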
@ -102,7 +102,7 @@
Get the optimizer parallel status (bool) of this parameter.

Used to filter the weight sharding operation in `AUTO_PARALLEL` and `SEMI_AUTO_PARALLEL` mode. It takes effect only when optimizer parallelism is enabled in `mindspore.context.set_auto_parallel_context()`.
Used to filter the weight sharding operation in `AUTO_PARALLEL` and `SEMI_AUTO_PARALLEL` mode. It takes effect only when optimizer parallelism is enabled in `mindspore.set_auto_parallel_context()`.

.. py:method:: parallel_optimizer_comm_recompute
    :property:
@ -86,7 +86,7 @@ mindspore.set_context
- **enable_dump** (bool) - This parameter is deprecated and will be removed in the next version.
- **save_dump_path** (str) - This parameter is deprecated and will be removed in the next version.
- **print_file_path** (str) - The path for saving print data. :class:`mindspore.ops.Print` can print the input tensor or string information, and :func:`mindspore.parse_print` can parse the saved file. If this parameter is set, print data is saved to the file; otherwise it is displayed on the screen. If the saved file already exists, a timestamp suffix is appended to the file name. Saving the data to a file solves the data-loss problem of screen printing. If it is not set, an error will be reported: "prompt to set the upper absolute path".
- **env_config_path** (str) - Sets the path of the MindSpore environment configuration file via `context.set_context(env_config_path="./mindspore_config.json")`.
- **env_config_path** (str) - Sets the path of the MindSpore environment configuration file via `mindspore.set_context(env_config_path="./mindspore_config.json")`.

Configure the Running Data Recorder:
@ -109,7 +109,7 @@ mindspore.set_context
- **pynative_synchronize** (bool) - Whether to enable synchronous device execution in PyNative mode. Default: False. When set to False, operators are executed asynchronously on the device, so when an operator fails, the exact location of the offending script code cannot be determined. When set to True, operators are executed synchronously on the device, which lowers execution performance, but when an operator fails, the location of the error in the script can be found from the call stack.
- **mode** (int) - Whether to run in GRAPH_MODE(0) or PYNATIVE_MODE(1). Default: GRAPH_MODE(0). GRAPH_MODE or PYNATIVE_MODE can be set through the `mode` attribute, and both modes support all backends.
- **enable_graph_kernel** (bool) - Whether to enable graph kernel fusion to optimize network execution performance. Default: False. If `enable_graph_kernel` is set to True, acceleration can be enabled. For details about graph kernel fusion, see `Enabling Graph Kernel Fusion <https://www.mindspore.cn/docs/zh-CN/master/design/graph_fusion_engine.html>`_ .
- **graph_kernel_flags** (str) - Optimization options for graph kernel fusion; they take priority over `enable_graph_kernel` when the two conflict. Only for experienced users. For example, context.set_context(graph_kernel_flags="--opt_level=2 --dump_as_text"). Some common options:
- **graph_kernel_flags** (str) - Optimization options for graph kernel fusion; they take priority over `enable_graph_kernel` when the two conflict. Only for experienced users. For example, mindspore.set_context(graph_kernel_flags="--opt_level=2 --dump_as_text"). Some common options:

- **opt_level**: Sets the optimization level. Default: 2. Graph kernel fusion is enabled when the value of opt_level is greater than 0. Optional values include:
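To make the relationship between `enable_graph_kernel` and `graph_kernel_flags` concrete, a short hedged sketch (the flag string is just the one quoted in the text above):

    import mindspore as ms

    # Either switch graph kernel fusion on with the boolean option...
    ms.set_context(enable_graph_kernel=True)

    # ...or drive it through graph_kernel_flags, which takes priority when the two conflict.
    ms.set_context(graph_kernel_flags="--opt_level=2 --dump_as_text")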
@ -5,7 +5,7 @@ mindspore.ops.print\_
Prints the input data.

By default, the output is printed to the screen. It can also be saved to a file by setting the `print_file_path` parameter through `context`. Once set, the output is saved to the specified file, and the data can be reloaded with the function :func:`mindspore.parse_print`. For more information, see :func:`mindspore.context.set_context` and :func:`mindspore.parse_print` .
By default, the output is printed to the screen. It can also be saved to a file by setting the `print_file_path` parameter through `context`. Once set, the output is saved to the specified file, and the data can be reloaded with the function :func:`mindspore.parse_print`. For more information, see :func:`mindspore.set_context` and :func:`mindspore.parse_print` .

.. note::
    In PyNative mode, please use the Python print function. In graph mode on the Ascend platform, bool, int, and float are converted to Tensor for printing, while str remains unchanged.
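A minimal sketch of the redirect-to-file flow described above (a hedged example: the file name is a placeholder, and `ops.Print` is used as the operator form of `print_`):

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, nn, ops

    # print_file_path takes effect in graph mode; per the note above, Ascend is the documented target.
    ms.set_context(mode=ms.GRAPH_MODE, print_file_path="print_output.data")

    class PrintNet(nn.Cell):
        def __init__(self):
            super().__init__()
            self.print = ops.Print()

        def construct(self, x):
            self.print("input tensor:", x)  # goes to print_output.data instead of the screen
            return x

    PrintNet()(Tensor(np.ones((2, 2), np.float32)))
    # The saved records can later be reloaded with:
    # data = ms.parse_print("print_output.data")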
@ -553,7 +553,7 @@ Since `mindspore.numpy` directly wraps MindSpore tensors and operators, it has a
- `ms_function`: for running codes in static graph mode for better efficiency.
- `GradOperation`: for automatic gradient computation.
- `mindspore.context`: for `mindspore.numpy` execution management.
- `mindspore.set_context`: for `mindspore.numpy` execution management.
- `mindspore.nn.Cell`: for using `mindspore.numpy` interfaces in MindSpore Deep Learning Models.

The following are examples:
@ -663,9 +663,9 @@ The following are examples:
For more details, see `API GradOperation <https://www.mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.GradOperation.html>`_ .

- Use mindspore.context to control execution mode
- Use mindspore.set_context to control execution mode

Most functions in `mindspore.numpy` can run in Graph Mode and PyNative Mode, and can run on CPU, GPU and Ascend. Like MindSpore, users can manage the execution mode using `mindspore.context`:
Most functions in `mindspore.numpy` can run in Graph Mode and PyNative Mode, and can run on CPU, GPU and Ascend. Like MindSpore, users can manage the execution mode using `mindspore.set_context`:

.. code-block:: python
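As a hedged illustration of the sentence above (the code block itself is truncated in this hunk; device targets other than CPU require the corresponding hardware):

    import mindspore as ms
    import mindspore.numpy as mnp

    # Pick the execution mode and backend once, before running mindspore.numpy code.
    ms.set_context(mode=ms.PYNATIVE_MODE, device_target="CPU")
    print(mnp.add(mnp.ones((2, 2)), mnp.ones((2, 2))))

    # Switching to graph mode only takes another set_context call.
    ms.set_context(mode=ms.GRAPH_MODE)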
@ -145,7 +145,7 @@ class Parameter(Tensor_):
layerwise_parallel (bool): When layerwise_parallel is true in data/hybrid parallel mode,
broadcast and gradients communication would not be applied to parameters. Default: False.
parallel_optimizer (bool): It is used to filter the weight shard operation in semi auto or auto parallel
mode. It works only when enable parallel optimizer in `mindspore.context.set_auto_parallel_context()`.
mode. It works only when enable parallel optimizer in `mindspore.set_auto_parallel_context()`.
Default: True.

Examples:
@ -509,7 +509,7 @@ class Parameter(Tensor_):
Get the optimizer parallel status(bool) of the parameter.

It is used to filter the weight shard operation in `AUTO_PARALLEL` and `SEMI_AUTO_PARALLEL` mode. It works only
when enable parallel optimizer in `mindspore.context.set_auto_parallel_context()`.
when enable parallel optimizer in `mindspore.set_auto_parallel_context()`.
"""
return self.param_info.parallel_optimizer
@ -4760,8 +4760,8 @@ class Tensor(Tensor_):
Examples:
>>> import numpy as np
>>> from mindspore import Tensor, context
>>> context.set_context(device_target="CPU")
>>> from mindspore import Tensor, set_context
>>> set_context(device_target="CPU")
>>> a = Tensor(np.array([[1, 2], [-4, -5], [2, 1]]).astype(np.float32))
>>> s, u, v = a.svd(full_matrices=True, compute_uv=True)
>>> print(s)
@ -463,7 +463,7 @@ def set_auto_parallel_context(**kwargs):
Note:
Attribute name is required for setting attributes.
If a program has tasks on different parallel modes, before setting a new parallel mode for the
next task, interface mindspore.context.reset_auto_parallel_context() should be called to reset
next task, interface mindspore.reset_auto_parallel_context() should be called to reset
the configuration.
Setting or changing parallel modes must be called before creating any Initializer, otherwise,
it may have RuntimeError when compiling the network.
@ -549,7 +549,7 @@ def set_auto_parallel_context(**kwargs):
configure. The configure provides more detailed behavior control about parallel training
when parallel optimizer is enabled. Currently it supports the key `gradient_accumulation_shard`.
The configure will be effective when we use
context.set_auto_parallel_context(enable_parallel_optimizer=True).
mindspore.set_auto_parallel_context(enable_parallel_optimizer=True).
It supports the following keys.

- gradient_accumulation_shard(bool): If true, the accumulation gradient parameters will be
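A hedged sketch of how the key mentioned above is passed in practice (illustrative values only; a real run additionally requires a distributed job with the parallel mode set up):

    import mindspore as ms

    ms.set_auto_parallel_context(
        parallel_mode="semi_auto_parallel",
        enable_parallel_optimizer=True,
        # Fine-grained behavior of the enabled parallel optimizer; currently the
        # documented key is gradient_accumulation_shard.
        parallel_optimizer_config={"gradient_accumulation_shard": True},
    )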
@ -796,7 +796,7 @@ def set_context(**kwargs):
solves the problem of data loss in screen printing when a large amount of data is generated.
If it is not set, an error will be reported: prompt to set the upper absolute path.
env_config_path (str): Config path for DFX.
Through context.set_context(env_config_path="./mindspore_config.json")
Through mindspore.set_context(env_config_path="./mindspore_config.json")

configure RDR:
@ -843,7 +843,7 @@ def set_context(**kwargs):
graph_kernel_flags (str):
Optimization options of graph kernel fusion, and the priority is higher when it conflicts
with enable_graph_kernel. Only for experienced users.
For example, context.set_context(graph_kernel_flags="--opt_level=2 --dump_as_text"). Some general options:
For example, mindspore.set_context(graph_kernel_flags="--opt_level=2 --dump_as_text"). Some general options:

- opt_level: Set the optimization level.
Default: 2. Graph kernel fusion can be enabled equivalently by setting opt_level greater than 0.
@ -54,9 +54,9 @@ def svd(a, full_matrices=False, compute_uv=True):
Examples:
>>> import numpy as np
>>> from mindspore import Tensor, context
>>> from mindspore import Tensor, set_context
>>> from mindspore import ops
>>> context.set_context(device_target="CPU")
>>> set_context(device_target="CPU")
>>> a = Tensor(np.array([[1, 2], [-4, -5], [2, 1]]).astype(np.float32))
>>> s, u, v = ops.svd(a, full_matrices=True, compute_uv=True)
>>> print(s)
@ -95,9 +95,9 @@ class Svd(Primitive):
Examples:
>>> import numpy as np
>>> from mindspore import Tensor, context
>>> from mindspore import Tensor, set_context
>>> from mindspore.ops.operations import linalg_ops as linalg
>>> context.set_context(device_target="CPU")
>>> set_context(device_target="CPU")
>>> svd = linalg.Svd(full_matrices=True, compute_uv=True)
>>> a = Tensor(np.array([[1, 2], [-4, -5], [2, 1]]).astype(np.float32))
>>> s, u, v = svd(a)