fix doc string

zhujingxuan 2021-12-20 20:37:38 +08:00
parent 76aaf62088
commit 646d1c6114
7 changed files with 66 additions and 65 deletions


@@ -31,21 +31,21 @@ def block_diag(*arrs):
Create a block diagonal matrix from provided arrays.
Given the inputs `A`, `B` and `C`, the output will have these
Tensor arranged on the diagonal::
Tensors arranged on the diagonal::
[[A, 0, 0],
[0, B, 0],
[0, 0, C]]
Args:
A, B, C, ... (Tensor): up to 2-D
Input Tensors. A 1-D Tensor or a 2-D Tensor with shape ``(1,n)``.
`A`, `B`, `C`, ... (Tensor): up to 2-D input Tensors.
A 1-D Tensor or a 2-D Tensor with shape :math:`(1,n)`.
Returns:
Tensor with `A`, `B`, `C`, ... on the diagonal which has the same dtype as `A`.
Raises:
ValueError: If there are tensors with dimensions higher than 2 in all arguments.
ValueError: If there are Tensors with dimensions higher than 2 in all arguments.
Supported Platforms:
``CPU`` ``GPU``
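For illustration, a minimal usage sketch of `block_diag` (the import path `mindspore.scipy.linalg` and the printed layout are assumptions, not taken from this diff; the semantics mirror `scipy.linalg.block_diag`)::

    import numpy as onp
    from mindspore import Tensor
    from mindspore.scipy.linalg import block_diag  # assumed import path

    A = Tensor(onp.array([[1., 0.], [0., 1.]]))
    B = Tensor(onp.array([[3., 4., 5.]]))
    C = Tensor(onp.array([[7.]]))

    # Blocks land on the diagonal; everything outside them is zero.
    out = block_diag(A, B, C)
    print(out.shape)  # (4, 6)
    # [[1. 0. 0. 0. 0. 0.]
    #  [0. 1. 0. 0. 0. 0.]
    #  [0. 0. 3. 4. 5. 0.]
    #  [0. 0. 0. 0. 0. 7.]]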
@@ -97,7 +97,7 @@ def solve_triangular(A, b, trans=0, lower=False, unit_diagonal=False,
Args:
A (Tensor): A triangular matrix of shape :math:`(N, N)`.
b (Tensor): A tensor of shape :math:`(M,)` or :math:`(M, N)`.
b (Tensor): A Tensor of shape :math:`(M,)` or :math:`(M, N)`.
Right-hand side matrix in :math:`A x = b`.
lower (bool, optional): Use only data contained in the lower triangle of `a`.
Default is to use upper triangle.
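A short sketch of the triangular solve described above, assuming the function is exposed as `mindspore.scipy.linalg.solve_triangular` (the import path is an assumption)::

    import numpy as onp
    from mindspore import Tensor
    from mindspore.scipy.linalg import solve_triangular  # assumed import path

    # Lower-triangular system A @ x = b, solved with lower=True.
    A = Tensor(onp.array([[3., 0., 0.],
                          [2., 1., 0.],
                          [1., 1., 2.]], dtype=onp.float32))
    b = Tensor(onp.array([3., 4., 6.], dtype=onp.float32))

    x = solve_triangular(A, b, lower=True)
    # x is roughly [1., 2., 1.5], and A @ x reproduces b.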
@@ -321,31 +321,31 @@ def eigh(a, b=None, lower=True, eigvals_only=False, overwrite_a=False,
Solve a standard or generalized eigenvalue problem for a complex
Hermitian or real symmetric matrix.
Find eigenvalues Tensor ``w`` and optionally eigenvectors Tensor ``v`` of Tensor ``a``,
where ``b`` is positive definite such that for every eigenvalue λ (i-th entry of w) and
its eigenvector ``vi`` (i-th column of``v``) satisfies::
Find eigenvalues Tensor `w` and optionally eigenvectors Tensor `v` of Tensor `a`,
where `b` is positive definite such that for every eigenvalue `λ` (i-th entry of w) and
its eigenvector `vi` (i-th column of `v`) satisfies:
a @ vi = λ * b @ vi
vi.conj().T @ a @ vi = λ
vi.conj().T @ b @ vi = 1
In the standard problem, ``b`` is assumed to be the identity matrix.
In the standard problem, `b` is assumed to be the identity matrix.
Args:
a (Tensor): A (M, M) complex Hermitian or real symmetric matrix whose eigenvalues and
a (Tensor): A :math:`(M, M)` complex Hermitian or real symmetric matrix whose eigenvalues and
eigenvectors will be computed.
b (Tensor, optional): A (M, M) complex Hermitian or real symmetric definite positive matrix in.
b (Tensor, optional): A :math:`(M, M)` complex Hermitian or real symmetric definite positive matrix in.
If omitted, identity matrix is assumed.
lower (bool, optional): Whether the pertinent Tensor data is taken from the lower or upper
triangle of ``a`` and, if applicable, ``b``. (Default: lower)
triangle of `a` and, if applicable, `b`. Default: True.
eigvals_only (bool, optional): Whether to calculate only eigenvalues and no eigenvectors.
(Default: both are calculated)
Default: False.
_type (int, optional): For the generalized problems, this keyword specifies the problem type
to be solved for ``w`` and ``v`` (only takes 1, 2, 3 as possible inputs)::
to be solved for `w` and `v` (only takes 1, 2, 3 as possible inputs):
1 => a @ v = w @ b @ v
2 => a @ b @ v = w @ v
3 => b @ a @ v = w @ v
This keyword is ignored for standard problems.
overwrite_a (bool, optional): Whether to overwrite data in ``a`` (may improve performance). Default is False.
overwrite_b (bool, optional): Whether to overwrite data in ``b`` (may improve performance). Default is False.
overwrite_a (bool, optional): Whether to overwrite data in `a` (may improve performance). Default: False.
overwrite_b (bool, optional): Whether to overwrite data in `b` (may improve performance). Default: False.
check_finite (bool, optional): Whether to check that the input matrices contain only finite numbers.
Disabling may give a performance gain, but may result in problems (crashes, non-termination)
if the inputs do contain infinities or NaNs.
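A minimal sketch of the standard (b = None) problem, assuming `eigh` is exposed as `mindspore.scipy.linalg.eigh` (the import path is an assumption) and returns values and vectors as described above::

    import numpy as onp
    from mindspore import Tensor
    from mindspore.scipy.linalg import eigh  # assumed import path

    # Small real symmetric matrix; its eigenvalues are 1 and 3.
    a = Tensor(onp.array([[2., 1.],
                          [1., 2.]], dtype=onp.float32))

    w, v = eigh(a)                       # ascending eigenvalues + eigenvectors
    w_only = eigh(a, eigvals_only=True)  # eigenvalues only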
@@ -353,13 +353,13 @@ def eigh(a, b=None, lower=True, eigvals_only=False, overwrite_a=False,
for the generalized eigenvalue problem and if the full set of eigenvalues is requested.).
Has no significant effect if eigenvectors are not requested.
eigvals (tuple, optional): Indexes of the smallest and largest (in ascending order) eigenvalues
and corresponding eigenvectors to be returned: 0 <= lo <= hi <= M-1. If omitted, all eigenvalues
and corresponding eigenvectors to be returned: :math:`0 <= lo <= hi <= M-1`. If omitted, all eigenvalues
and eigenvectors are returned.
Returns:
- Tensor with shape (N,), The N (1<=N<=M) selected eigenvalues, in ascending order,
- Tensor with shape :math:`(N,)`, the :math:`N` (:math:`1 <= N <= M`) selected eigenvalues, in ascending order,
each repeated according to its multiplicity.
- Tensor with shape (M, N), (if ``eigvals_only == False``)
- Tensor with shape :math:`(M, N)`, the corresponding eigenvectors (only returned if `eigvals_only` is False).
Raises:
LinAlgError: If eigenvalue computation does not converge, an error occurred, or b matrix is not
@@ -445,20 +445,21 @@ def lu_factor(a, overwrite_a=False, check_finite=True):
"""
Compute pivoted LU decomposition of a matrix.
The decomposition is::
.. math::
A = P L U
where P is a permutation matrix, L lower triangular with unit diagonal elements, and U upper triangular.
Args:
a (Tensor): square matrix of (M, M) to decompose
overwrite_a (bool, optional): Whether to overwrite data in A (may increase performance)
a (Tensor): square matrix of :math:`(M, M)` to decompose.
overwrite_a (bool, optional): Whether to overwrite data in `A` (may increase performance).
check_finite (bool, optional): Whether to check that the input matrix contains only finite numbers.
Disabling may give a performance gain, but may result in problems
(crashes, non-termination) if the inputs do contain infinities or NaNs.
Returns:
Tensor, a square matrix of (N, N) containing U in its upper triangle, and L in its lower triangle.
The unit diagonal elements of L are not stored.
Tensor, (N,) Pivot indices representing the permutation matrix P:
Tensor, a square matrix of :math:`(N, N)` containing `U` in its upper triangle, and `L` in its lower triangle.
The unit diagonal elements of `L` are not stored.
Tensor, :math:`(N,)` Pivot indices representing the permutation matrix `P`:
row `i` of the matrix was interchanged with row `piv[i]`.
Supported Platforms:
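A usage sketch matching the description above, with the import path `mindspore.scipy.linalg.lu_factor` assumed::

    import numpy as onp
    from mindspore import Tensor
    from mindspore.scipy.linalg import lu_factor  # assumed import path

    a = Tensor(onp.array([[2., 5., 8., 7.],
                          [5., 2., 2., 8.],
                          [7., 5., 6., 6.],
                          [5., 4., 4., 8.]], dtype=onp.float32))

    # lu packs U in its upper triangle and L (unit diagonal implied) below it;
    # piv[i] is the row that row i was interchanged with.
    lu, piv = lu_factor(a)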
@@ -496,8 +497,8 @@ def lu(a, permute_l=False, overwrite_a=False, check_finite=True):
diagonal elements, and U upper triangular.
Args:
a (Tensor): a (M, N) matrix to decompose.
permute_l (bool, optional): Perform the multiplication P*L (Default: do not permute).
a (Tensor): a :math:`(M, N)` matrix to decompose.
permute_l (bool, optional): Perform the multiplication :math:`P*L` (Default: do not permute).
overwrite_a (bool, optional): Whether to overwrite data in a (may improve performance).
check_finite (bool, optional): Whether to check that the input matrix contains
only finite numbers. Disabling may give a performance gain, but may result
@@ -506,14 +507,14 @@ def lu(a, permute_l=False, overwrite_a=False, check_finite=True):
Returns:
**(If permute_l == False)**
- Tensor, (M, M) Permutation matrix.
- Tensor, (M, K) Lower triangular or trapezoidal matrix with unit diagonal. K = min(M, N).
- Tensor, (K, N) Upper triangular or trapezoidal matrix.
- Tensor, :math:`(M, M)` Permutation matrix.
- Tensor, :math:`(M, K)` Lower triangular or trapezoidal matrix with unit diagonal. :math:`K = min(M, N)`.
- Tensor, :math:`(K, N)` Upper triangular or trapezoidal matrix.
**(If permute_l == True)**
- Tensor, (M, K) Permuted L matrix. K = min(M, N).
- Tensor, (K, N) Upper triangular or trapezoidal matrix.
- Tensor, :math:`(M, K)` Permuted L matrix. :math:`K = min(M, N)`.
- Tensor, :math:`(K, N)` Upper triangular or trapezoidal matrix.
Supported Platforms:
``CPU`` ``GPU``
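And a sketch of the full decomposition, again assuming the `mindspore.scipy.linalg.lu` import path::

    import numpy as onp
    from mindspore import Tensor
    from mindspore.scipy.linalg import lu  # assumed import path

    a = Tensor(onp.array([[2., 5., 8., 7.],
                          [5., 2., 2., 8.],
                          [7., 5., 6., 6.],
                          [5., 4., 4., 8.]], dtype=onp.float32))

    p, l, u = lu(a)                 # a == p @ l @ u (up to rounding)
    pl, u2 = lu(a, permute_l=True)  # P*L merged into a single factor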


@@ -40,10 +40,10 @@ class SolveTriangular(PrimitiveWithInfer):
Inputs:
- **A** (Tensor) - A triangular matrix of shape :math:`(N, N)`.
- **b** (Tensor) - A tensor of shape :math:`(M,)` or :math:`(M, N)`. Right-hand side matrix in :math:`A x = b`.
- **b** (Tensor) - A Tensor of shape :math:`(M,)` or :math:`(M, N)`. Right-hand side matrix in :math:`A x = b`.
Returns:
- **x** (Tensor) - A tensor of shape :math:`(M,)` or :math:`(M, N)`,
- **x** (Tensor) - A Tensor of shape :math:`(M,)` or :math:`(M, N)`,
which is the solution to the system :math:`A x = b`.
Shape of :math:`x` matches :math:`b`.
@@ -150,7 +150,7 @@ class CholeskySolver(PrimitiveWithInfer):
Inputs:
- **A** (Tensor) - A matrix of shape :math:`(M, M)` to be decomposed.
- **b** (Tensor) - A tensor of shape :math:`(M,)` or :math:`(..., M)`.
- **b** (Tensor) - A Tensor of shape :math:`(M,)` or :math:`(..., M)`.
Right-hand side matrix in :math:`A x = b`.
Returns
-------


@@ -41,7 +41,7 @@ class _BFGSResults(NamedTuple):
search converged, then this is the (local) minimum of the objective
function.
g_k (Tensor): containing the gradient of the objective function at `x_k`. If
the search converged the l2-norm of this tensor should be below the
the search converged the l2-norm of this Tensor should be below the
tolerance.
H_k (Tensor): containing the inverse of the estimated Hessian.
old_old_fval (float): Function value for the point preceding x=x_k.


@@ -314,10 +314,10 @@ def line_search(f, xk, pk, gfk=None, old_fval=None, old_old_fval=None, c1=1e-4,
xk (Tensor): initial guess.
pk (Tensor): direction to search in. Assumes the direction is a descent direction.
gfk (Tensor): initial value of value_and_gradient at position `xk`.
old_fval (Tensor): The same as gfk.
old_fval (Tensor): The same as `gfk`.
old_old_fval (Tensor): unused argument, only for scipy API compliance.
c1 (float): Wolfe criteria constant, see ref.
c2 (float): The same as c1.
c2 (float): The same as `c1`.
maxiter (int): maximum number of iterations to search
Returns:
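A rough sketch of calling `line_search` on a simple quadratic, assuming it is exported from `mindspore.scipy.optimize` (both the import path and the result handling are assumptions)::

    import numpy as onp
    import mindspore.numpy as mnp
    from mindspore import Tensor
    from mindspore.scipy.optimize import line_search  # assumed import path

    def f(x):
        return mnp.sum(x ** 2)   # simple convex objective

    xk = Tensor(onp.array([1., 1.], dtype=onp.float32))
    pk = Tensor(onp.array([-1., -1.], dtype=onp.float32))  # a descent direction at xk

    res = line_search(f, xk, pk)  # step size satisfying the Wolfe conditions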


@@ -69,14 +69,14 @@ def minimize(func, x0, args=(), *, method, tol=None, options=None):
On GPU, the supported dtype is float32.
Args:
fun (Callable): the objective function to be minimized, ``fun(x, *args) -> float``,
where ``x`` is a 1-D array with shape ``(n,)`` and ``args`` is a tuple
fun (Callable): the objective function to be minimized, :math:`fun(x, *args) -> float`,
where `x` is a 1-D array with shape :math:`(n,)` and `args` is a tuple
of the fixed parameters needed to completely specify the function.
``fun`` must support differentiation.
x0 (Tensor): initial guess. Array of real elements of size ``(n,)``, where ``n`` is
`fun` must support differentiation.
x0 (Tensor): initial guess. Array of real elements of size :math:`(n,)`, where `n` is
the number of independent variables.
args (Tuple): extra arguments passed to the objective function.
method (str): solver type. Currently only ``"BFGS"`` is supported.
method (str): solver type. Currently only `"BFGS"` is supported.
tol (float, optional): tolerance for termination. For detailed control, use solver-specific
options.
options (Mapping[str, Any], optional): a dictionary of solver options. All methods accept the following


@@ -193,17 +193,17 @@ def gmres(A, b, x0=None, *, tol=1e-5, atol=0.0, restart=20, maxiter=None,
Args:
A (Union[Tensor, function]): 2D Tensor or function that calculates the linear
map (matrix-vector product) ``Ax`` when called like ``A(x)``.
``A`` must return Tensor with the same structure and shape as its argument.
map (matrix-vector product) :math:`Ax` when called like :math:`A(x)`.
As a function, `A` must return a Tensor with the same structure and shape as its input.
b (Tensor): Right hand side of the linear system representing a single vector.
Can be stored as a Tensor
Can be stored as a Tensor.
x0 (Tensor, optional): Starting guess for the solution. Must have the same structure
as ``b``. If this is unspecified, zeroes are used.
as `b`. If this is unspecified, zeroes are used.
tol (float, optional): Tolerances for convergence,
``norm(residual) <= max(tol*norm(b), atol)``. We do not implement SciPy's
:math:`norm(residual) <= max(tol*norm(b), atol)`. We do not implement SciPy's
"legacy" behavior, so MindSpore's tolerance will differ from SciPy unless you
explicitly pass ``atol`` to SciPy's ``gmres``.
atol (float, optional): The same as tol.
explicitly pass `atol` to SciPy's `gmres`.
atol (float, optional): The same as `tol`.
restart (integer, optional): Size of the Krylov subspace ("number of iterations")
built between restarts. GMRES works by approximating the true solution x as its
projection into a Krylov space of this dimension - this parameter
@@ -211,11 +211,11 @@ def gmres(A, b, x0=None, *, tol=1e-5, atol=0.0, restart=20, maxiter=None,
solution. Larger values increase both number of iterations and iteration
cost, but may be necessary for convergence. The algorithm terminates
early if convergence is achieved before the full subspace is built.
Default is 20.
maxiter (integer): Maximum number of times to rebuild the size-``restart``
Default: 20.
maxiter (int): Maximum number of times to rebuild the size-`restart`
Krylov space starting from the solution found at the last iteration. If GMRES
halts or is very slow, decreasing this parameter may help.
Default is infinite.
Default: infinite.
M (Union[Tensor, function]): Preconditioner for A. The preconditioner should approximate the
inverse of A. Effective preconditioning dramatically improves the
rate of convergence, which implies that fewer iterations are needed
@@ -229,7 +229,7 @@ def gmres(A, b, x0=None, *, tol=1e-5, atol=0.0, restart=20, maxiter=None,
iteration. It does not allow for early termination, but has much less overhead on GPUs.
Returns:
- Tensor, The converged solution. Has the same structure as ``b``.
- Tensor, The converged solution. Has the same structure as `b`.
- None, Placeholder for convergence information.
Supported Platforms:
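A small dense example of the call described above, with the import path `mindspore.scipy.sparse.linalg.gmres` assumed::

    import numpy as onp
    from mindspore import Tensor
    from mindspore.scipy.sparse.linalg import gmres  # assumed import path

    A = Tensor(onp.array([[3., 2., 0.],
                          [1., -1., 0.],
                          [0., 5., 1.]], dtype=onp.float32))
    b = Tensor(onp.array([2., 4., -1.], dtype=onp.float32))

    x, info = gmres(A, b, atol=1e-5)  # info is the None placeholder noted above
    # x is roughly [2., -2., 9.]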
@@ -320,11 +320,11 @@ class CG(nn.Cell):
def cg(A, b, x0=None, *, tol=1e-5, atol=0.0, maxiter=None, M=None):
"""Use Conjugate Gradient iteration to solve ``Ax = b``.
The numerics of MindSpore's ``cg`` should exact match SciPy's ``cg`` (up to
The numerics of MindSpore's `cg` should exactly match SciPy's `cg` (up to
numerical precision).
Derivatives of ``cg`` are implemented via implicit differentiation with
another ``cg`` solve, rather than by differentiating *through* the solver.
Derivatives of `cg` are implemented via implicit differentiation with
another `cg` solve, rather than by differentiating *through* the solver.
They will be accurate only if both solves converge.
Note:
@@ -333,24 +333,24 @@ def cg(A, b, x0=None, *, tol=1e-5, atol=0.0, maxiter=None, M=None):
Args:
A (Union[Tensor, function]): 2D Tensor or function that calculates the linear
map (matrix-vector product) ``Ax`` when called like ``A(x)``.
``A`` must return Tensor with the same structure and shape as its argument.
map (matrix-vector product) :math:`Ax` when called like :math:`A(x)`.
As a function, `A` must return a Tensor with the same structure and shape as its input.
b (Tensor): Right hand side of the linear system representing a single vector. Can be
stored as a Tensor.
x0 (Tensor): Starting guess for the solution. Must have the same structure as ``b``.
tol (float, optional): Tolerances for convergence, ``norm(residual) <= max(tol*norm(b), atol)``.
x0 (Tensor): Starting guess for the solution. Must have the same structure as `b`.
tol (float, optional): Tolerances for convergence, :math:`norm(residual) <= max(tol*norm(b), atol)`.
We do not implement SciPy's "legacy" behavior, so MindSpore's tolerance will
differ from SciPy unless you explicitly pass ``atol`` to SciPy's ``cg``.
atol (float, optional): The same as tol.
differ from SciPy unless you explicitly pass `atol` to SciPy's `cg`.
atol (float, optional): The same as `tol`.
maxiter (int): Maximum number of iterations. Iteration will stop after maxiter
steps even if the specified tolerance has not been achieved.
M (Union[Tensor, function]): Preconditioner for A. The preconditioner should approximate the
inverse of A. Effective preconditioning dramatically improves the
inverse of A. Effective preconditioning dramatically improves the
rate of convergence, which implies that fewer iterations are needed
to reach a given error tolerance.
Returns:
- Tensor, The converged solution. Has the same structure as ``b``.
- Tensor, The converged solution. Has the same structure as `b`.
- None, Placeholder for convergence information.
Supported Platforms:
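A matching sketch for `cg` on a small symmetric positive-definite system, with the import path `mindspore.scipy.sparse.linalg.cg` assumed::

    import numpy as onp
    from mindspore import Tensor
    from mindspore.scipy.sparse.linalg import cg  # assumed import path

    # A must be symmetric positive definite for CG.
    A = Tensor(onp.array([[4., 1.],
                          [1., 3.]], dtype=onp.float32))
    b = Tensor(onp.array([1., 2.], dtype=onp.float32))

    x, info = cg(A, b)
    # x is roughly [0.0909, 0.6364], i.e. the solution of A @ x = b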


@@ -29,7 +29,7 @@ _eps_net = ops.Eps()
def _convert_64_to_32(tensor):
"""Convert tensor with float64/int64 types to float32/int32."""
"""Convert Tensor with float64/int64 types to float32/int32."""
if tensor.dtype == mstype.float64:
return tensor.astype("float32")
if tensor.dtype == mstype.int64: