forked from mindspore-Ecosystem/mindspore
Fix api description error in shard
parent 52ddf7b4a5
commit 510e6a5778
@@ -9,10 +9,11 @@ mindspore.shard

 .. note::
     The execution mode needs to be set to PyNative mode, the parallel mode in `set_auto_parallel_context` needs to be "auto_parallel", and the search mode needs to be "sharding_propagation".
+    If the input contains a Parameter, its strategy should be set in `in_strategy`.
     For more information about shard, refer to `Functional Operator Sharding <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/pynative_shard_function_parallel.html>`_ .

 Parameters:
-    - **fn** (Union[Cell, Function]) - Function to be executed through distributed parallelism. Its arguments and return value must all be Tensor.
+    - **fn** (Union[Cell, Function]) - Function to be executed through distributed parallelism. Its arguments and return value must all be Tensor or Parameter.
       If fn is a Cell with parameters, fn must be an instantiated object; otherwise its internal parameters cannot be accessed.
     - **in_strategy** (tuple) - The sharding strategy of each input. Each element of the tuple can be a tuple or None: a tuple specifies the sharding strategy for every dimension of that input, while None means the input runs with data parallelism by default.
     - **out_strategy** (Union[tuple, None]) - The sharding strategy of each output, used in the same way as `in_strategy`. Currently not enabled. Default: None.
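For context, a minimal sketch of how the documented API could be called, assuming a distributed environment has already been initialized (device and communication setup is omitted here) and using a hypothetical two-input function:

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    # Setup required by the note above: PyNative mode, "auto_parallel", "sharding_propagation".
    ms.set_context(mode=ms.PYNATIVE_MODE)
    ms.set_auto_parallel_context(parallel_mode="auto_parallel", search_mode="sharding_propagation")

    def matmul_fn(x, w):                      # arguments and return value are Tensor
        return ops.matmul(x, w)

    # Split x 2-ways along its first axis, keep w unsharded; out_strategy stays at its default (None).
    parallel_matmul = ms.shard(matmul_fn, in_strategy=((2, 1), (1, 1)))

    x = Tensor(np.ones((8, 4)), ms.float32)
    w = Tensor(np.ones((4, 4)), ms.float32)
    y = parallel_matmul(x, w)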
@@ -529,7 +529,9 @@

     Each element of the tuple specifies the distribution strategy of the corresponding input/output Tensor; see the description of `mindspore.ops.Primitive.shard`. It can also be set to None, in which case that input/output runs with data parallelism by default.
     The parallel strategies of the remaining operators are derived from the strategies specified for the inputs and outputs.

-    .. note:: The execution mode needs to be PyNative mode with ParallelMode.AUTO_PARALLEL, and the search mode in `set_auto_parallel_context` needs to be "sharding_propagation".
+    .. note:: The execution mode needs to be PyNative mode with ParallelMode.AUTO_PARALLEL,
+        and the search mode in `set_auto_parallel_context` needs to be "sharding_propagation".
+        If the input contains a Parameter, its strategy should be set in `in_strategy`.

     Parameters:
         - **in_strategy** (tuple) - The sharding strategy of each input. Each element of the tuple can be a tuple or None: a tuple specifies the sharding strategy for every dimension of that input, while None means the input runs with data parallelism by default.
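The note added in this hunk says that a Parameter input also takes its layout from `in_strategy`. A hedged sketch of that case, reusing the mode and parallel-context setup from the previous snippet (the function name and shapes are made up for illustration):

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, Parameter, ops

    weight = Parameter(Tensor(np.ones((4, 4)), ms.float32), name="weight")

    def dense_fn(x, w):                       # the second input is a Parameter
        return ops.matmul(x, w)

    # The Parameter input w gets its strategy from in_strategy as well: its second axis is split 2-ways.
    parallel_dense = ms.shard(dense_fn, in_strategy=((2, 1), (1, 2)))
    out = parallel_dense(Tensor(np.ones((8, 4)), ms.float32), weight)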
@@ -501,6 +501,7 @@ class Cell(Cell_):

     Note:
         Only effective in PYNATIVE_MODE and in ParallelMode.AUTO_PARALLEL with
         search_mode in auto_parallel_context set as sharding_propagation.
+        If the input contains a Parameter, its strategy should be set in `in_strategy`.

     Args:
         in_strategy (tuple): Define the layout of inputs, each element of the tuple should be a tuple or None. Tuple
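To illustrate the `in_strategy` contract stated above (each element is a tuple or None), a small sketch where the second input is left as None and therefore falls back to data parallelism; the function name and shapes are hypothetical, and the same context setup as in the first snippet is assumed:

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    def add_fn(x, b):
        return ops.add(x, b)

    # The first input is split 4-ways along axis 0; None leaves the second input
    # to the default data-parallel handling.
    parallel_add = ms.shard(add_fn, in_strategy=((4, 1), None))
    out = parallel_add(Tensor(np.ones((8, 4)), ms.float32), Tensor(np.ones((8, 4)), ms.float32))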
@@ -132,10 +132,11 @@ def shard(fn, in_strategy, out_strategy=None, parameter_plan=None, device="Ascen

         You need to set the execution mode to PyNative mode,
         set the parallel mode in `set_auto_parallel_context` to "auto_parallel"
         and the search mode to "sharding_propagation".
+        If the input contains a Parameter, its strategy should be set in `in_strategy`.

     Args:
         fn (Union[Cell, Function]): Function to be executed in parallel.
-            Its arguments and return value must be Tensor.
+            Its arguments and return value must be Tensor or Parameter.
             If fn is a Cell with parameters, fn needs to be an instantiated object,
             otherwise its internal parameters cannot be accessed.
         in_strategy (tuple): Define the layout of inputs, each element of the tuple should be a tuple or None.
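Because the docstring requires fn to be an instantiated object when it is a Cell with parameters, here is a minimal sketch under that assumption (the `DenseCell` class is hypothetical, and the PyNative/auto_parallel context setup from the first snippet is assumed):

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, Parameter, nn, ops

    class DenseCell(nn.Cell):
        def __init__(self):
            super().__init__()
            self.weight = Parameter(Tensor(np.ones((4, 4)), ms.float32), name="weight")

        def construct(self, x):
            return ops.matmul(x, self.weight)

    net = DenseCell()      # pass an instance, not the class, so its internal parameters are reachable
    parallel_net = ms.shard(net, in_strategy=((2, 1),))
    out = parallel_net(Tensor(np.ones((8, 4)), ms.float32))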