correct the errors on webpage
parent 756f1e4249
commit 4950409570
@@ -16,7 +16,7 @@ mindspore.COOTensor
     [0, 0, 0, 0]]

    .. note::
-       This is an experimental feature and the API may change in the future. Currently, values at duplicate indices in a COOTensor are not merged.
+       This is an experimental feature and the API may change in the future. Currently, values at duplicate indices in a COOTensor are not merged. If the indices contain out-of-bounds values, the result is undefined.

    Parameters:
        - **indices** (Tensor) - A 2-D integer Tensor of shape `[N, ndims]`, where N and ndims are the number of `values` in the sparse tensor and the number of dimensions of the COOTensor, respectively. Currently `ndims` can only be 2. Make sure the values of `indices` stay within the given shape.
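A minimal construction sketch matching the `indices` description above (illustrative data; standard MindSpore imports assumed):

>>> import mindspore as ms
>>> from mindspore import Tensor, COOTensor
>>> indices = Tensor([[0, 1], [1, 2]], dtype=ms.int64)   # shape [N, ndims] with ndims == 2
>>> values = Tensor([1.0, 2.0], dtype=ms.float32)
>>> coo = COOTensor(indices, values, (3, 4))             # all index values lie inside the shape (3, 4)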
@@ -133,7 +133,7 @@ mindspore.set_context
    - **compile_cache_path** (str) - The path for saving the frontend graph compilation cache. Default: ".". If the directory does not exist, it is created automatically. The cache is saved to `compile_cache_path/rank_${rank_id}/`, where `rank_id` is the ID of the current device in the cluster.
    - **runtime_num_threads** (int) - The number of thread-pool threads used by the runtime actors and the CPU operator kernels; must be greater than 0. Default: 30. If several processes run at the same time, set a smaller value to avoid thread contention.
    - **disable_format_transform** (bool) - Whether to disable the automatic NCHW-to-NHWC format conversion. When an fp16 network performs worse than its fp32 counterpart, `disable_format_transform` can be set to True to try to improve training performance. Default: False.
-   - **support_binary** (bool) - Whether running .pyc or .so files in graph mode is supported. To support running .so or .pyc files in graph mode, set`support_binary`to True and run the .py file once so that the interface source code is saved into the interface definition .py file, which therefore must be writable. Then compile the .py file into a .pyc or .so file, which can then be run in graph mode.
+   - **support_binary** (bool) - Whether running .pyc or .so files in graph mode is supported. To support running .so or .pyc files in graph mode, set `support_binary` to True and run the .py file once so that the interface source code is saved into the interface definition .py file, which therefore must be writable. Then compile the .py file into a .pyc or .so file, which can then be run in graph mode.

    Raises:
        - **ValueError** - If the input key is not an attribute of the context.
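A short sketch of setting the options documented above (the values are illustrative, not recommendations):

>>> import mindspore as ms
>>> ms.set_context(compile_cache_path="./graph_cache")   # cache lands in ./graph_cache/rank_${rank_id}/
>>> ms.set_context(runtime_num_threads=8)                # keep small when several processes share a host
>>> ms.set_context(disable_format_transform=False)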
@@ -11,7 +11,7 @@ mindspore.ops.amax
    - **keep_dims** (bool) - If True, the reduced dimensions are kept with size 1; otherwise they are removed. Default: False.

    Returns:
-       Tensor.
+       Tensor.

        - If `axis` is () and `keep_dims` is False, a 0-dimensional Tensor holding the maximum of all elements in the input Tensor is returned.
        - If `axis` is an int, for example 1, and `keep_dims` is False, the shape of the output is :math:`(x_0, x_2, ..., x_R)`.
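A minimal sketch of the two return shapes described above, using the parameter names from this docstring (illustrative input):

>>> from mindspore import Tensor, ops
>>> x = Tensor([[1.0, 9.0], [3.0, 4.0]])
>>> whole = ops.amax(x, axis=(), keep_dims=False)   # 0-D Tensor: maximum over all elements
>>> per_row = ops.amax(x, axis=1, keep_dims=False)  # axis 1 is reduced away -> shape (2,)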
@@ -10,9 +10,6 @@
    .. math::
        (x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)

-   .. note::
-       The value range of `axis` is :math:`[-dims, dims - 1]`, where `dims` is the number of dimensions of `input_x`.
-
    Parameters:
        - **input_x** (tuple, list) - The input, a tuple or list of Tensors. Suppose this tuple or list holds two Tensors, `x1` and `x2`. To perform `Concat` along axis 0, the shapes of all axes other than axis 0 must be equal, that is, :math:`x1.shape[1] = x2.shape[1], x1.shape[2] = x2.shape[2], ..., x1.shape[R] = x2.shape[R]`, where :math:`R` denotes the last axis.
        - **axis** (int) - The axis to concatenate along, in the range :math:`[-R, R)`. Default: 0.
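A small sketch of the shape rule above, assuming the functional interface `mindspore.ops.concat` with a tuple of inputs and an `axis` (illustrative shapes):

>>> from mindspore import Tensor, ops
>>> x1 = Tensor([[1.0, 2.0], [3.0, 4.0]])   # shape (2, 2)
>>> x2 = Tensor([[5.0, 6.0]])               # shape (1, 2): equal on every axis except axis 0
>>> out = ops.concat((x1, x2), axis=0)      # result shape (3, 2)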
@@ -3,17 +3,17 @@ mindspore.ops.csr_softmax

    .. py:function:: mindspore.ops.csr_softmax(logits, dtype)

-   Computes the softmax of a CSRTensorMatrix.
+   Computes the softmax of a CSRTensorMatrix.

-   Parameters:
-       - **logits** (CSRTensor) - The input sparse CSRTensor.
-       - **dtype** (dtype) - The data type of the input.
+   Parameters:
+       - **logits** (CSRTensor) - The input sparse CSRTensor.
+       - **dtype** (dtype) - The data type of the input.

-   Returns:
-       - **CSRTensor** (CSRTensor) - A csr_tensor containing:
-
-           - **indptr** - Indicates the start and end points of the non-zero values in each row.
-           - **indices** - The column positions of all non-zero values in the input.
-           - **values** - The non-zero values of the dense tensor.
-           - **shape** - The shape of the csr_tensor.
+   Returns:
+       - **CSRTensor** (CSRTensor) - A csr_tensor containing:
+
+           - **indptr** - Indicates the start and end points of the non-zero values in each row.
+           - **indices** - The column positions of all non-zero values in the input.
+           - **values** - The non-zero values of the dense tensor.
+           - **shape** - The shape of the csr_tensor.
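A minimal calling sketch for the signature shown above; the CSR data here is illustrative:

>>> import mindspore as ms
>>> from mindspore import Tensor, CSRTensor, ops
>>> indptr = Tensor([0, 1, 2], dtype=ms.int32)
>>> indices = Tensor([0, 1], dtype=ms.int32)
>>> values = Tensor([2.0, 1.0], dtype=ms.float32)
>>> logits = CSRTensor(indptr, indices, values, (2, 4))
>>> out = ops.csr_softmax(logits, dtype=ms.float32)   # returns a CSRTensor (indptr, indices, values, shape)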
@@ -9,7 +9,7 @@ mindspore.ops.grad

    1. Differentiate with respect to the inputs; in this case `grad_position` is not None and `weights` is None.
    2. Differentiate with respect to the network parameters; in this case `grad_position` is None and `weights` is not None.
-   3. Differentiate with respect to both the inputs and the network parameters; in this case `grad_position`and `weights` are both not None.
+   3. Differentiate with respect to both the inputs and the network parameters; in this case `grad_position` and `weights` are both not None.

    Parameters:
        - **fn** (Union[Cell, Function]) - The function or network to be differentiated.
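A sketch of the three modes listed above, using the documented `grad_position` / `weights` parameters and an illustrative one-layer network:

>>> import mindspore as ms
>>> from mindspore import Tensor, nn, ops
>>> net = nn.Dense(2, 1)
>>> x = Tensor([[1.0, 2.0]], ms.float32)
>>> g_inputs = ops.grad(net, grad_position=0)(x)                                       # mode 1
>>> g_weights = ops.grad(net, grad_position=None, weights=net.trainable_params())(x)   # mode 2
>>> g_both = ops.grad(net, grad_position=0, weights=net.trainable_params())(x)         # mode 3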
@@ -16,7 +16,7 @@ mindspore.ops.log_softmax
    - **logits** (Tensor) - Shape :math:`(N, *)`, where :math:`*` means any number of additional dimensions. The data type is float16 or float32.
    - **axis** (int) - The axis on which to perform the operation. Default: -1.

-   Outputs:
+   Returns:
        Tensor, with the same data type and shape as `logits`.

    Raises:
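A one-call sketch for the signature above (illustrative input):

>>> from mindspore import Tensor, ops
>>> logits = Tensor([[1.0, 2.0, 3.0]])
>>> out = ops.log_softmax(logits, axis=-1)   # same shape and dtype as `logits`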
@@ -11,7 +11,7 @@ mindspore.ops.prod
    - **keep_dims** (bool) - If True, the reduced dimensions are kept with size 1; otherwise they are removed. Default: False.

    Returns:
-       Tensor.
+       Tensor.

        - If `axis` is () and `keep_dims` is False, a 0-dimensional Tensor holding the product of all elements in the input Tensor is returned.
        - If `axis` is an int, for example 1, and `keep_dims` is False, the shape of the output is :math:`(x_0, x_2, ..., x_R)`.
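A minimal sketch of the two return shapes described above, using the parameter names from this docstring:

>>> from mindspore import Tensor, ops
>>> x = Tensor([[1.0, 2.0], [3.0, 4.0]])
>>> total = ops.prod(x, axis=(), keep_dims=False)    # 0-D Tensor: product of all elements
>>> per_row = ops.prod(x, axis=1, keep_dims=False)   # axis 1 is reduced away -> shape (2,)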
@@ -1,7 +1,7 @@
    mindspore.ops.softmax
    =====================

-   .. py::: function.ops.softmax(x, axis=-1)
+   .. py:function:: mindspore.ops.softmax(x, axis=-1)

    Softmax function.
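A one-call sketch for the corrected directive above (illustrative input):

>>> from mindspore import Tensor, ops
>>> x = Tensor([[1.0, 2.0, 3.0]])
>>> probs = ops.softmax(x, axis=-1)   # entries along the last axis sum to 1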
@@ -686,7 +686,7 @@ def get_auto_offload():
    Returns:
        bool, Whether the automatic offload feature is enabled.

-   Example:
+   Examples:
        >>> # Get the global configuration of the automatic offload feature.
        >>> auto_offload = ds.config.get_auto_offload()
    """
@@ -1061,7 +1061,8 @@ def slice(input_x, begin, size):
    The slice `begin` represents the offset in each dimension of `input_x`,
    The slice `size` represents the size of the output tensor.

-   Note that `begin` is zero-based and `size` is one-based.
+   Note:
+       `begin` is zero-based and `size` is one-based.

    If `size[i]` is -1, all remaining elements in dimension i are included in the slice.
    This is equivalent to setting :math:`size[i] = input_x.shape(i) - begin[i]`.
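A short sketch of `begin`, `size`, and the -1 convention described above (illustrative input):

>>> from mindspore import Tensor, ops
>>> x = Tensor([[1, 2, 3], [4, 5, 6]])
>>> out = ops.slice(x, (0, 1), (2, -1))   # start at column 1, take the rest of each row -> shape (2, 2)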
@@ -2662,11 +2662,6 @@ def equal(x, y):
    r"""
    Computes the equivalence between two tensors element-wise.

-   Inputs of `x` and `y` comply with the implicit type conversion rules to make the data types consistent.
-   The inputs must be two tensors or one tensor and one scalar.
-   When the inputs are two tensors, the shapes of them could be broadcast.
-   When the inputs are one tensor and one scalar, the scalar could only be a constant.
-
    .. math::

        out_{i} =\begin{cases}
@@ -2674,6 +2669,12 @@ def equal(x, y):
        & \text{False, if } x_{i} \ne y_{i}
        \end{cases}

+   Note:
+       - Inputs of `x` and `y` comply with the implicit type conversion rules to make the data types consistent.
+       - The inputs must be two tensors or one tensor and one scalar.
+       - When the inputs are two tensors, the shapes of them could be broadcast.
+       - When the inputs are one tensor and one scalar, the scalar could only be a constant.
+
    Args:
        x (Union[Tensor, Number]): The first input is a number or
            a tensor whose data type is number.
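A minimal sketch of the tensor/tensor and tensor/scalar cases listed in the Note above:

>>> from mindspore import Tensor, ops
>>> x = Tensor([1, 2, 3])
>>> y = Tensor([1, 0, 3])
>>> mask = ops.equal(x, y)    # element-wise comparison of two tensors (shapes may broadcast)
>>> mask2 = ops.equal(x, 2)   # one tensor and one constant scalar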
@@ -3036,12 +3037,13 @@ def minimum(x, y):
    r"""
    Computes the minimum of input tensors element-wise.

-   Inputs of `x` and `y` comply with the implicit type conversion rules to make the data types consistent.
-   The inputs must be two tensors or one tensor and one scalar.
-   When the inputs are two tensors, dtypes of them cannot be bool at the same time.
-   When the inputs are one tensor and one scalar, the scalar could only be a constant.
-   Shapes of them are supposed to be broadcast.
-   If one of the elements being compared is a NaN, then that element is returned.
+   Note:
+       - Inputs of `x` and `y` comply with the implicit type conversion rules to make the data types consistent.
+       - The inputs must be two tensors or one tensor and one scalar.
+       - When the inputs are two tensors, dtypes of them cannot be bool at the same time.
+       - When the inputs are one tensor and one scalar, the scalar could only be a constant.
+       - Shapes of them are supposed to be broadcast.
+       - If one of the elements being compared is a NaN, then that element is returned.

    .. math::
        output_i = min(x_i, y_i)
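A minimal sketch of the element-wise behaviour above (illustrative inputs):

>>> from mindspore import Tensor, ops
>>> x = Tensor([1.0, 5.0, 3.0])
>>> y = Tensor([2.0, 4.0, 3.0])
>>> out = ops.minimum(x, y)   # element-wise minimum; shapes may broadcast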
@@ -347,7 +347,7 @@ def random_poisson(shape, rate, seed=None, dtype=mstype.float32):
        mindspore.dtype.float64, mindspore.dtype.float32 or mindspore.dtype.float16. Default: mindspore.dtype.float32.

    Returns:
-       A Tensor whose shape is `mindspore.concat([`shape`, mindspore.shape(`rate`)], axis=0)` and data type is equal to
+       A Tensor whose shape is `mindspore.concat(['shape', mindspore.shape('rate')], axis=0)` and data type is equal to
        argument `dtype`.

    Raises:
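A calling sketch for the documented signature; per the Returns text, the output shape is the concatenation of `shape` and the shape of `rate` (values illustrative):

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> shape = Tensor([2, 3], ms.int32)
>>> rate = Tensor([5.0, 10.0], ms.float32)
>>> out = ops.random_poisson(shape, rate, seed=5, dtype=ms.float32)   # output shape (2, 3, 2)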
@@ -634,6 +634,7 @@ class ReduceMean(_Reduce):
        TypeError: If `keep_dims` is not a bool.
+       TypeError: If `x` is not a Tensor.
        TypeError: If `axis` is not one of the following: int, tuple or list.
        ValueError: If `axis` is out of range.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``
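A short usage sketch for the `_Reduce` classes in this and the following hunks (ReduceMean shown; ReduceMax, ReduceMin and ReduceProd are called the same way):

>>> from mindspore import Tensor, ops
>>> x = Tensor([[1.0, 2.0], [3.0, 4.0]])
>>> reduce_mean = ops.ReduceMean(keep_dims=False)
>>> out = reduce_mean(x, 1)   # mean along axis 1 -> shape (2,)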
@@ -918,6 +919,7 @@ class ReduceMax(_Reduce):
        TypeError: If `keep_dims` is not a bool.
+       TypeError: If `x` is not a Tensor.
        TypeError: If `axis` is not one of the following: int, tuple or list.
        ValueError: If `axis` is out of range.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``
@@ -1004,6 +1006,7 @@ class ReduceMin(_Reduce):
        TypeError: If `keep_dims` is not a bool.
+       TypeError: If `x` is not a Tensor.
        TypeError: If `axis` is not one of the following: int, tuple or list.
        ValueError: If `axis` is out of range.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``
@@ -1126,6 +1129,7 @@ class ReduceProd(_Reduce):
        TypeError: If `keep_dims` is not a bool.
+       TypeError: If `x` is not a Tensor.
        TypeError: If `axis` is not one of the following: int, tuple or list.
        ValueError: If `axis` is out of range.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``