fix doc string

zhujingxuan 2021-12-20 20:37:38 +08:00
parent 76aaf62088
commit 646d1c6114
7 changed files with 66 additions and 65 deletions


@@ -31,21 +31,21 @@ def block_diag(*arrs):
 Create a block diagonal matrix from provided arrays.
 Given the inputs `A`, `B` and `C`, the output will have these
-Tensor arranged on the diagonal::
+Tensors arranged on the diagonal::
 [[A, 0, 0],
 [0, B, 0],
 [0, 0, C]]
 Args:
-A, B, C, ... (Tensor): up to 2-D
-Input Tensors. A 1-D Tensor or a 2-D Tensor with shape ``(1,n)``.
+`A`, `B`, `C`, ... (Tensor): up to 2-D Input Tensors.
+A 1-D Tensor or a 2-D Tensor with shape :math:`(1,n)`.
 Returns:
 Tensor with `A`, `B`, `C`, ... on the diagonal which has the same dtype as `A`.
 Raises:
-ValueError: If there are tensors with dimensions higher than 2 in all arguments.
+ValueError: If there are Tensors with dimensions higher than 2 in all arguments.
 Supported Platforms:
 ``CPU`` ``GPU``
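
The block_diag hunk above documents a SciPy-compatible API. As a rough, non-authoritative sketch of the behaviour it describes, the SciPy counterpart (which the MindSpore function is assumed to mirror) arranges the inputs along the diagonal and zero-fills everything else:

    import numpy as np
    from scipy.linalg import block_diag

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5]])
    C = np.array([[6, 7]])            # a (1, n) row is accepted, as the docstring notes

    out = block_diag(A, B, C)
    # A, B and C sit on the diagonal; all other entries are zero:
    # [[1 2 0 0 0]
    #  [3 4 0 0 0]
    #  [0 0 5 0 0]
    #  [0 0 0 6 7]]
    print(out.shape)                  # (4, 5)
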
@@ -97,7 +97,7 @@ def solve_triangular(A, b, trans=0, lower=False, unit_diagonal=False,
 Args:
 A (Tensor): A triangular matrix of shape :math:`(N, N)`.
-b (Tensor): A tensor of shape :math:`(M,)` or :math:`(M, N)`.
+b (Tensor): A Tensor of shape :math:`(M,)` or :math:`(M, N)`.
 Right-hand side matrix in :math:`A x = b`.
 lower (bool, optional): Use only data contained in the lower triangle of `a`.
 Default is to use upper triangle.
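
For context on the solve_triangular parameters touched here (`b` and `lower`), a minimal sketch with the SciPy function of the same name, which this API is modeled on; the `lower=` keyword selects which triangle of `A` is actually read:

    import numpy as np
    from scipy.linalg import solve_triangular

    A = np.array([[3., 0., 0.],
                  [2., 1., 0.],
                  [1., 0., 1.]])      # only the lower triangle is meaningful here
    b = np.array([6., 5., 4.])

    # Default is to use the upper triangle; pass lower=True for a lower-triangular A.
    x = solve_triangular(A, b, lower=True)
    print(x)                          # [2. 1. 2.]
    print(np.allclose(A @ x, b))      # True
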
@@ -321,31 +321,31 @@ def eigh(a, b=None, lower=True, eigvals_only=False, overwrite_a=False,
 Solve a standard or generalized eigenvalue problem for a complex
 Hermitian or real symmetric matrix.
-Find eigenvalues Tensor ``w`` and optionally eigenvectors Tensor ``v`` of Tensor ``a``,
-where ``b`` is positive definite such that for every eigenvalue λ (i-th entry of w) and
-its eigenvector ``vi`` (i-th column of``v``) satisfies::
+Find eigenvalues Tensor `w` and optionally eigenvectors Tensor `v` of Tensor `a`,
+where `b` is positive definite such that for every eigenvalue `λ` (i-th entry of w) and
+its eigenvector `vi` (i-th column of `v`) satisfies:
 a @ vi = λ * b @ vi
 vi.conj().T @ a @ vi = λ
 vi.conj().T @ b @ vi = 1
-In the standard problem, ``b`` is assumed to be the identity matrix.
+In the standard problem, `b` is assumed to be the identity matrix.
 Args:
-a (Tensor): A (M, M) complex Hermitian or real symmetric matrix whose eigenvalues and
+a (Tensor): A :math:`(M, M)` complex Hermitian or real symmetric matrix whose eigenvalues and
 eigenvectors will be computed.
-b (Tensor, optional): A (M, M) complex Hermitian or real symmetric definite positive matrix in.
+b (Tensor, optional): A :math:`(M, M)` complex Hermitian or real symmetric definite positive matrix in.
 If omitted, identity matrix is assumed.
 lower (bool, optional): Whether the pertinent Tensor data is taken from the lower or upper
-triangle of ``a`` and, if applicable, ``b``. (Default: lower)
+triangle of `a` and, if applicable, `b`. Default: True.
 eigvals_only (bool, optional): Whether to calculate only eigenvalues and no eigenvectors.
-(Default: both are calculated)
+Default: False.
 _type (int, optional): For the generalized problems, this keyword specifies the problem type
-to be solved for ``w`` and ``v`` (only takes 1, 2, 3 as possible inputs)::
+to be solved for `w` and `v` (only takes 1, 2, 3 as possible inputs):
 1 => a @ v = w @ b @ v
 2 => a @ b @ v = w @ v
 3 => b @ a @ v = w @ v
 This keyword is ignored for standard problems.
-overwrite_a (bool, optional): Whether to overwrite data in ``a`` (may improve performance). Default is False.
-overwrite_b (bool, optional): Whether to overwrite data in ``b`` (may improve performance). Default is False.
+overwrite_a (bool, optional): Whether to overwrite data in `a` (may improve performance). Default: False.
+overwrite_b (bool, optional): Whether to overwrite data in `b` (may improve performance). Default is False.
 check_finite (bool, optional): Whether to check that the input matrices contain only finite numbers.
 Disabling may give a performance gain, but may result in problems (crashes, non-termination)
 if the inputs do contain infinities or NaNs.
@@ -353,13 +353,13 @@ def eigh(a, b=None, lower=True, eigvals_only=False, overwrite_a=False,
 for generalized eigenvalue problem and if full set of eigenvalues are requested.).
 Has no significant effect if eigenvectors are not requested.
 eigvals (tuple, optional): Indexes of the smallest and largest (in ascending order) eigenvalues
-and corresponding eigenvectors to be returned: 0 <= lo <= hi <= M-1. If omitted, all eigenvalues
+and corresponding eigenvectors to be returned: :math:`0 <= lo <= hi <= M-1`. If omitted, all eigenvalues
 and eigenvectors are returned.
 Returns:
-- Tensor with shape (N,), The N (1<=N<=M) selected eigenvalues, in ascending order,
+- Tensor with shape :math:`(N,)`, The :math:`N (1<=N<=M)` selected eigenvalues, in ascending order,
 each repeated according to its multiplicity.
-- Tensor with shape (M, N), (if ``eigvals_only == False``)
+- Tensor with shape :math:`(M, N)`, (if :math:`eigvals_only == False`)
 Raises:
 LinAlgError: If eigenvalue computation does not converge, an error occurred, or b matrix is not
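
A quick numerical check of the eigh relationships quoted above (a @ vi = λ * b @ vi and the b-normalization of each eigenvector), written against scipy.linalg.eigh as a stand-in for the MindSpore function; the matrices are made up for illustration:

    import numpy as np
    from scipy.linalg import eigh

    a = np.array([[6., 3., 1.],
                  [3., 6., 1.],
                  [1., 1., 5.]])          # real symmetric
    b = np.eye(3)                         # standard problem: b is the identity

    w, v = eigh(a, b, lower=True)         # eigenvalues ascending; eigvals_only=False by default
    for i in range(3):
        vi = v[:, i]
        assert np.allclose(a @ vi, w[i] * (b @ vi))      # a @ vi = λ * b @ vi
        assert np.isclose(vi.conj().T @ b @ vi, 1.0)     # vi.conj().T @ b @ vi = 1
    print(w)
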
@@ -445,20 +445,21 @@ def lu_factor(a, overwrite_a=False, check_finite=True):
 """
 Compute pivoted LU decomposition of a matrix.
 The decomposition is::
+.. math::
 A = P L U
 where P is a permutation matrix, L lower triangular with unit diagonal elements, and U upper triangular.
 Args:
-a (Tensor): square matrix of (M, M) to decompose
-overwrite_a (bool, optional): Whether to overwrite data in A (may increase performance)
+a (Tensor): square matrix of :math:`(M, M)` to decompose.
+overwrite_a (bool, optional): Whether to overwrite data in `A` (may increase performance).
 check_finite (bool, optional): Whether to check that the input matrix contains only finite numbers.
 Disabling may give a performance gain, but may result in problems
 (crashes, non-termination) if the inputs do contain infinities or NaNs.
 Returns:
-Tensor, a square matrix of (N, N) containing U in its upper triangle, and L in its lower triangle.
-The unit diagonal elements of L are not stored.
-Tensor, (N,) Pivot indices representing the permutation matrix P:
+Tensor, a square matrix of :math:`(N, N)` containing `U` in its upper triangle, and `L` in its lower triangle.
+The unit diagonal elements of `L` are not stored.
+Tensor, :math:`(N,)` Pivot indices representing the permutation matrix `P`:
 row i of matrix was interchanged with row piv[i].
 Supported Platforms:
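
The packed return convention documented above (U in the upper triangle, implicit-unit-diagonal L below, plus pivot indices) matches SciPy's lu_factor; a small sketch of how such a factorization is typically reused for a solve, with lu_solve standing in for whatever consumes the MindSpore output:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    a = np.array([[2., 5., 8., 7.],
                  [5., 2., 2., 8.],
                  [7., 5., 6., 6.],
                  [5., 4., 4., 8.]])
    b = np.array([1., 1., 1., 1.])

    lu, piv = lu_factor(a)        # lu packs U above the diagonal and L (unit diagonal implied) below
    x = lu_solve((lu, piv), b)    # reuse the factorization; piv[i] records the row swapped with row i
    print(np.allclose(a @ x, b))  # True
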
@@ -496,8 +497,8 @@ def lu(a, permute_l=False, overwrite_a=False, check_finite=True):
 diagonal elements, and U upper triangular.
 Args:
-a (Tensor): a (M, N) matrix to decompose.
-permute_l (bool, optional): Perform the multiplication P*L (Default: do not permute).
+a (Tensor): a :math:`(M, N)` matrix to decompose.
+permute_l (bool, optional): Perform the multiplication :math:`P*L` (Default: do not permute).
 overwrite_a (bool, optional): Whether to overwrite data in a (may improve performance).
 check_finite (bool, optional): Whether to check that the input matrix contains
 only finite numbers. Disabling may give a performance gain, but may result
@@ -506,14 +507,14 @@ def lu(a, permute_l=False, overwrite_a=False, check_finite=True):
 Returns:
 **(If permute_l == False)**
-- Tensor, (M, M) Permutation matrix.
-- Tensor, (M, K) Lower triangular or trapezoidal matrix with unit diagonal. K = min(M, N).
-- Tensor, (K, N) Upper triangular or trapezoidal matrix.
+- Tensor, :math:`(M, M)` Permutation matrix.
+- Tensor, :math:`(M, K)` Lower triangular or trapezoidal matrix with unit diagonal. :math:`K = min(M, N)`.
+- Tensor, :math:`(K, N)` Upper triangular or trapezoidal matrix.
 **(If permute_l == True)**
-- Tensor, (M, K) Permuted L matrix. K = min(M, N).
-- Tensor, (K, N) Upper triangular or trapezoidal matrix.
+- Tensor, :math:`(M, K)` Permuted L matrix. :math:`K = min(M, N)`.
+- Tensor, :math:`(K, N)` Upper triangular or trapezoidal matrix.
 Supported Platforms:
 ``CPU`` ``GPU``
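
The two return layouts listed above (permute_l False vs. True) and the K = min(M, N) shapes can be checked against SciPy's lu, which this docstring follows; the example matrix is arbitrary:

    import numpy as np
    from scipy.linalg import lu

    a = np.arange(12, dtype=float).reshape(3, 4)   # M=3, N=4, so K = min(M, N) = 3

    p, l, u = lu(a)                    # P: (3, 3), L: (3, 3), U: (3, 4)
    print(np.allclose(a, p @ l @ u))   # True

    pl, u2 = lu(a, permute_l=True)     # PL: (3, 3), U: (3, 4)
    print(np.allclose(a, pl @ u2))     # True
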


@@ -40,10 +40,10 @@ class SolveTriangular(PrimitiveWithInfer):
 Inputs:
 - **A** (Tensor) - A triangular matrix of shape :math:`(N, N)`.
-- **b** (Tensor) - A tensor of shape :math:`(M,)` or :math:`(M, N)`. Right-hand side matrix in :math:`A x = b`.
+- **b** (Tensor) - A Tensor of shape :math:`(M,)` or :math:`(M, N)`. Right-hand side matrix in :math:`A x = b`.
 Returns:
-- **x** (Tensor) - A tensor of shape :math:`(M,)` or :math:`(M, N)`,
+- **x** (Tensor) - A Tensor of shape :math:`(M,)` or :math:`(M, N)`,
 which is the solution to the system :math:`A x = b`.
 Shape of :math:`x` matches :math:`b`.
@@ -150,7 +150,7 @@ class CholeskySolver(PrimitiveWithInfer):
 Inputs:
 - **A** (Tensor) - A matrix of shape :math:`(M, M)` to be decomposed.
-- **b** (Tensor) - A tensor of shape :math:`(M,)` or :math:`(..., M)`.
+- **b** (Tensor) - A Tensor of shape :math:`(M,)` or :math:`(..., M)`.
 Right-hand side matrix in :math:`A x = b`.
 Returns
 -------
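
CholeskySolver itself is an internal MindSpore primitive, so as a purely illustrative analogue of the A x = b solve it describes, here is the SciPy Cholesky-based flow (cho_factor plus cho_solve); A must be symmetric positive definite for this to apply:

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    A = np.array([[4., 2.],
                  [2., 3.]])           # symmetric positive definite
    b = np.array([6., 5.])

    c, low = cho_factor(A)             # Cholesky factor of A
    x = cho_solve((c, low), b)         # solve A x = b with that factor
    print(np.allclose(A @ x, b))       # True
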


@@ -41,7 +41,7 @@ class _BFGSResults(NamedTuple):
 search converged, then this is the (local) minimum of the objective
 function.
 g_k (Tensor): containing the gradient of the objective function at `x_k`. If
-the search converged the l2-norm of this tensor should be below the
+the search converged the l2-norm of this Tensor should be below the
 tolerance.
 H_k (Tensor): containing the inverse of the estimated Hessian.
 old_old_fval (float): Function value for the point preceding x=x_k.


@@ -314,10 +314,10 @@ def line_search(f, xk, pk, gfk=None, old_fval=None, old_old_fval=None, c1=1e-4,
 xk (Tensor): initial guess.
 pk (Tensor): direction to search in. Assumes the direction is a descent direction.
 gfk (Tensor): initial value of value_and_gradient as position.
-old_fval (Tensor): The same as gfk.
+old_fval (Tensor): The same as `gfk`.
 old_old_fval (Tensor): unused argument, only for scipy API compliance.
 c1 (float): Wolfe criteria constant, see ref.
-c2 (float): The same as c1.
+c2 (float): The same as `c1`.
 maxiter (int): maximum number of iterations to search
 Returns:
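
For the line_search arguments touched here (gfk/old_fval and the Wolfe constants c1/c2), a sketch with SciPy's line_search; note that SciPy passes the gradient callable separately, whereas the MindSpore wrapper above folds value and gradient together, so this only approximates the same idea:

    import numpy as np
    from scipy.optimize import line_search

    f = lambda x: float(np.sum(x ** 2))   # objective
    grad = lambda x: 2.0 * x              # its gradient

    xk = np.array([1.0, 1.0])             # current point
    pk = -grad(xk)                        # a descent direction, as the docstring assumes

    # c1 and c2 are the Wolfe-condition constants named in the Args above.
    alpha, *_ = line_search(f, grad, xk, pk, c1=1e-4, c2=0.9)
    print(alpha)                          # accepted step length satisfying the Wolfe conditions
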


@@ -69,14 +69,14 @@ def minimize(func, x0, args=(), *, method, tol=None, options=None):
 On GPU, the supported dtypes is float32.
 Args:
-fun (Callable): the objective function to be minimized, ``fun(x, *args) -> float``,
-where ``x`` is a 1-D array with shape ``(n,)`` and ``args`` is a tuple
+fun (Callable): the objective function to be minimized, :math:`fun(x, *args) -> float`,
+where `x` is a 1-D array with shape :math:`(n,)` and `args` is a tuple
 of the fixed parameters needed to completely specify the function.
-``fun`` must support differentiation.
-x0 (Tensor): initial guess. Array of real elements of size ``(n,)``, where ``n`` is
+`fun` must support differentiation.
+x0 (Tensor): initial guess. Array of real elements of size :math:`(n,)`, where `n` is
 the number of independent variables.
 args (Tuple): extra arguments passed to the objective function.
-method (str): solver type. Currently only ``"BFGS"`` is supported.
+method (str): solver type. Currently only `"BFGS"` is supported.
 tol (float, optional): tolerance for termination. For detailed control, use solver-specific
 options.
 options (Mapping[str, Any], optional): a dictionary of solver options. All methods accept the following
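
As a rough companion to the minimize Args above, here is the SciPy call this interface mirrors, with fun(x, *args) -> float, an (n,)-shaped initial guess, and method="BFGS"; the objective and numbers are made up:

    import numpy as np
    from scipy.optimize import minimize

    def fun(x, a, b):
        """Rosenbrock-style objective: fun(x, *args) -> float with x of shape (n,)."""
        return (a - x[0]) ** 2 + b * (x[1] - x[0] ** 2) ** 2

    x0 = np.zeros(2)                                   # initial guess of size (n,)
    res = minimize(fun, x0, args=(1.0, 100.0), method="BFGS", tol=1e-6)
    print(res.x)                                       # approximately [1. 1.]
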


@@ -193,17 +193,17 @@ def gmres(A, b, x0=None, *, tol=1e-5, atol=0.0, restart=20, maxiter=None,
 Args:
 A (Union[Tensor, function]): 2D Tensor or function that calculates the linear
-map (matrix-vector product) ``Ax`` when called like ``A(x)``.
-``A`` must return Tensor with the same structure and shape as its argument.
+map (matrix-vector product) :math:`Ax` when called like :math:`A(x)`.
+As function, `A` must return Tensor with the same structure and shape as its input matrix.
 b (Tensor): Right hand side of the linear system representing a single vector.
-Can be stored as a Tensor
+Can be stored as a Tensor.
 x0 (Tensor, optional): Starting guess for the solution. Must have the same structure
-as ``b``. If this is unspecified, zeroes are used.
+as `b`. If this is unspecified, zeroes are used.
 tol (float, optional): Tolerances for convergence,
-``norm(residual) <= max(tol*norm(b), atol)``. We do not implement SciPy's
+:math:`norm(residual) <= max(tol*norm(b), atol)`. We do not implement SciPy's
 "legacy" behavior, so MindSpore's tolerance will differ from SciPy unless you
-explicitly pass ``atol`` to SciPy's ``gmres``.
-atol (float, optional): The same as tol.
+explicitly pass `atol` to SciPy's `gmres`.
+atol (float, optional): The same as `tol`.
 restart (integer, optional): Size of the Krylov subspace ("number of iterations")
 built between restarts. GMRES works by approximating the true solution x as its
 projection into a Krylov space of this dimension - this parameter
@@ -211,11 +211,11 @@ def gmres(A, b, x0=None, *, tol=1e-5, atol=0.0, restart=20, maxiter=None,
 solution. Larger values increase both number of iterations and iteration
 cost, but may be necessary for convergence. The algorithm terminates
 early if convergence is achieved before the full subspace is built.
-Default is 20.
-maxiter (integer): Maximum number of times to rebuild the size-``restart``
+Default: 20.
+maxiter (int): Maximum number of times to rebuild the size-`restart`
 Krylov space starting from the solution found at the last iteration. If GMRES
 halts or is very slow, decreasing this parameter may help.
-Default is infinite.
+Default: infinite.
 M (Union[Tensor, function]): Preconditioner for A. The preconditioner should approximate the
 inverse of A. Effective preconditioning dramatically improves the
 rate of convergence, which implies that fewer iterations are needed
@@ -229,7 +229,7 @@ def gmres(A, b, x0=None, *, tol=1e-5, atol=0.0, restart=20, maxiter=None,
 iteration. It does not allow for early termination, but has much less overhead on GPUs.
 Returns:
-- Tensor, The converged solution. Has the same structure as ``b``.
+- Tensor, The converged solution. Has the same structure as `b`.
 - None, Placeholder for convergence information.
 Supported Platforms:
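
To make the restart/maxiter wording above concrete, here is the SciPy gmres call these docs reference; unlike the MindSpore version, SciPy returns an integer convergence flag rather than a None placeholder. The matrix is a toy example:

    import numpy as np
    from scipy.sparse.linalg import gmres

    A = np.array([[4., 1., 0.],
                  [1., 3., 1.],
                  [0., 1., 2.]])
    b = np.array([1., 2., 3.])

    # restart bounds the Krylov subspace built between restarts;
    # maxiter bounds how many times that size-restart space is rebuilt.
    x, info = gmres(A, b, restart=20, maxiter=100)
    print(info)                              # 0 means the tolerance was reached
    print(np.allclose(A @ x, b, atol=1e-4))  # True
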
@@ -320,11 +320,11 @@ class CG(nn.Cell):
 def cg(A, b, x0=None, *, tol=1e-5, atol=0.0, maxiter=None, M=None):
 """Use Conjugate Gradient iteration to solve ``Ax = b``.
-The numerics of MindSpore's ``cg`` should exact match SciPy's ``cg`` (up to
+The numerics of MindSpore's `cg` should exact match SciPy's `cg` (up to
 numerical precision).
-Derivatives of ``cg`` are implemented via implicit differentiation with
-another ``cg`` solve, rather than by differentiating *through* the solver.
+Derivatives of `cg` are implemented via implicit differentiation with
+another `cg` solve, rather than by differentiating *through* the solver.
 They will be accurate only if both solves converge.
 Note:
@@ -333,24 +333,24 @@ def cg(A, b, x0=None, *, tol=1e-5, atol=0.0, maxiter=None, M=None):
 Args:
 A (Union[Tensor, function]): 2D Tensor or function that calculates the linear
-map (matrix-vector product) ``Ax`` when called like ``A(x)``.
-``A`` must return Tensor with the same structure and shape as its argument.
+map (matrix-vector product) :math:`Ax` when called like :math:`A(x)`.
+As function, `A` must return Tensor with the same structure and shape as its input matrix.
 b (Tensor): Right hand side of the linear system representing a single vector. Can be
 stored as a Tensor.
-x0 (Tensor): Starting guess for the solution. Must have the same structure as ``b``.
-tol (float, optional): Tolerances for convergence, ``norm(residual) <= max(tol*norm(b), atol)``.
+x0 (Tensor): Starting guess for the solution. Must have the same structure as `b`.
+tol (float, optional): Tolerances for convergence, :math:`norm(residual) <= max(tol*norm(b), atol)`.
 We do not implement SciPy's "legacy" behavior, so MindSpore's tolerance will
-differ from SciPy unless you explicitly pass ``atol`` to SciPy's ``cg``.
-atol (float, optional): The same as tol.
+differ from SciPy unless you explicitly pass `atol` to SciPy's `cg`.
+atol (float, optional): The same as `tol`.
 maxiter (int): Maximum number of iterations. Iteration will stop after maxiter
 steps even if the specified tolerance has not been achieved.
 M (Union[Tensor, function]): Preconditioner for A. The preconditioner should approximate the
 inverse of A. Effective preconditioning dramatically improves the
 rate of convergence, which implies that fewer iterations are needed
 to reach a given error tolerance.
 Returns:
-- Tensor, The converged solution. Has the same structure as ``b``.
+- Tensor, The converged solution. Has the same structure as `b`.
 - None, Placeholder for convergence information.
 Supported Platforms:
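
Similarly for cg: the Union[Tensor, function] form of A described above corresponds to passing a matrix-vector-product callable (a LinearOperator in SciPy); A must be symmetric positive definite. Again, SciPy returns an info flag where MindSpore returns None:

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    A = np.array([[4., 1., 0.],
                  [1., 3., 1.],
                  [0., 1., 2.]])       # symmetric positive definite, as cg requires
    b = np.array([1., 2., 3.])

    # A given as a function computing the matrix-vector product A(x).
    A_op = LinearOperator(A.shape, matvec=lambda v: A @ v)

    x, info = cg(A_op, b, maxiter=1000)
    print(info)                              # 0 means the tolerance was met
    print(np.allclose(A @ x, b, atol=1e-4))  # True
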


@@ -29,7 +29,7 @@ _eps_net = ops.Eps()
 def _convert_64_to_32(tensor):
-"""Convert tensor with float64/int64 types to float32/int32."""
+"""Convert Tensor with float64/int64 types to float32/int32."""
 if tensor.dtype == mstype.float64:
 return tensor.astype("float32")
 if tensor.dtype == mstype.int64:
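
The helper in this last hunk (shown truncated) narrows 64-bit Tensors to their 32-bit counterparts; a NumPy analogue of the same dtype check, with the function name chosen here purely for illustration:

    import numpy as np

    def convert_64_to_32(arr):
        """Illustrative analogue: narrow float64/int64 arrays to float32/int32."""
        if arr.dtype == np.float64:
            return arr.astype(np.float32)
        if arr.dtype == np.int64:
            return arr.astype(np.int32)
        return arr

    print(convert_64_to_32(np.zeros(3)).dtype)                    # float32
    print(convert_64_to_32(np.arange(3, dtype=np.int64)).dtype)   # int32
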