Code Reference

Projected Riccati equations

proj_ric_utils.pymess_dae2_cnt_riccati(mmat=None, amat=None, jmat=None, bmat=None, wmat=None, z0=None, mtxoldb=None, transposed=False, aditol=5e-10, nwtn_res2_tol=5e-08, maxit=20, verbose=False, linesearch=False, **kw)[source]

Solve the projected algebraic Riccati equation via Newton-ADI:

M.T*X*A + A.T*X*M - M.T*X*B*B.T*X*M + J.T*Y*M + M.T*Y.T*J = -W*W.T

J*X*M = 0 and M.T*X*J.T = 0

If mtxoldb is given (e.g., as the feedback computed in a previous step of a Newton iteration), the coefficient matrix with feedback

A.T <- A.T - mtxoldb*bmat.T

is considered.
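For orientation, with M = I and without the constraint matrix J, the equation above reduces to a standard continuous-time algebraic Riccati equation, which can be checked densely with SciPy (a minimal sketch with random data, not using this module):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Dense analogue of the equation above with M = I and no constraint J:
#   A.T*X + X*A - X*B*B.T*X + W*W.T = 0
np.random.seed(0)
n, k = 5, 2
amat = -np.eye(n) + 0.1 * np.random.randn(n, n)   # a stable-ish A
bmat = np.random.randn(n, 1)
wmat = np.random.randn(n, k)

xmat = solve_continuous_are(amat, bmat, wmat @ wmat.T, np.eye(1))
res = amat.T @ xmat + xmat @ amat - xmat @ bmat @ bmat.T @ xmat + wmat @ wmat.T
print(np.linalg.norm(res))  # close to zero
```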

proj_ric_utils.comp_proj_lyap_res_norm(Z, amat=None, mmat=None, wmat=None, jmat=None, umat=None, vmat=None, Sinv=None)[source]

Compute the squared Frobenius norm of the projected Lyapunov residual:

res = P.T*[ F.T*Z*Z.T*M + M.T*Z*Z.T*F + W*W.T ]*P

proj_ric_utils.solve_proj_lyap_stein(amat=None, jmat=None, wmat=None, mmat=None, umat=None, vmat=None, transposed=False, adi_dict={'adi_max_steps': 150, 'adi_newZ_reltol': 1e-08}, nwtn_adi_dict=None, **kw)[source]

Approximate the solution X of the projected Lyapunov equation

[A-UV].T*X*M + M.T*X*[A-UV] + J.T*Y*M + M.T*Y.T*J = -W*W.T

J*X*M = 0 and M.T*X*J.T = 0

by considering the equivalent Stein equations and computing the first members of the series converging to X.

We use the Sherman-Morrison-Woodbury (SMW) formula: (A-UV).-1 = A.-1 + A.-1*U*[ I - V*A.-1*U ].-1*V*A.-1

for the transpose:

(A-UV).-T = A.-T + A.-T*V.T*[ I - U.T*A.-T*V.T ].-1*U.T*A.-T

= (A.T - V.T*U.T).-1

see numOptAff.pdf
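The SMW identity can be verified numerically with plain NumPy (a minimal sketch with random data):

```python
import numpy as np

# Check the Sherman-Morrison-Woodbury identity used above:
#   (A - U*V)^{-1} = A^{-1} + A^{-1}*U*(I - V*A^{-1}*U)^{-1}*V*A^{-1}
np.random.seed(1)
n, l = 6, 2
amat = np.eye(n) + 0.1 * np.random.randn(n, n)
umat = np.random.randn(n, l)
vmat = np.random.randn(l, n)

ainv = np.linalg.inv(amat)
sinv = np.linalg.inv(np.eye(l) - vmat @ ainv @ umat)   # the small (L,L) inverse
smw = ainv + ainv @ umat @ sinv @ vmat @ ainv
direct = np.linalg.inv(amat - umat @ vmat)
print(np.abs(smw - direct).max())  # agrees up to roundoff
```

Only the small (L,L) system is inverted explicitly; in the module, applications of A^{-1} are realized through sparse factorizations instead of a dense inverse.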

proj_ric_utils.proj_alg_ric_newtonadi(mmat=None, amat=None, jmat=None, bmat=None, wmat=None, z0=None, mtxoldb=None, transposed=False, nwtn_adi_dict={'adi_max_steps': 150, 'adi_newZ_reltol': 1e-05, 'nwtn_max_steps': 14, 'nwtn_upd_reltol': 1e-08}, **kw)[source]

Solve the projected algebraic Riccati equation via Newton-ADI:

M.T*X*A + A.T*X*M - M.T*X*B*B.T*X*M + J.T*Y*M + M.T*Y.T*J = -W*W.T

J*X*M = 0 and M.T*X*J.T = 0

If mtxoldb is given (e.g., as the feedback computed in a previous step of a Newton iteration), the coefficient matrix with feedback

A.T <- A.T - mtxoldb*bmat.T

is considered.

Linear algebra utilities

lin_alg_utils.app_prj_via_sadpnt(amat=None, jmat=None, rhsv=None, jmatT=None, umat=None, vmat=None, transposedprj=False)[source]

Apply a projection via solving a saddle point problem.

The theory is as follows. Consider the projection

\[P = I - A^{-1}J_1^T(J_2A^{-1}J_1^T)^{-1}J_2.\]

Then \(Pv\) can be obtained via

\[\begin{split}\mathcal{A}\begin{bmatrix} Pv \\ * \end{bmatrix} = \begin{bmatrix} Av \\ 0 \end{bmatrix},\end{split}\]

where

\[\begin{split}\mathcal{A} := \begin{bmatrix} A & J_1^T \\ J_2 & 0 \end{bmatrix}.\end{split}\]

And \(P^Tv\) can be obtained via

\[\begin{split}\mathcal A^{T}\begin{bmatrix} A^{-T}P^Tv \\ * \end{bmatrix} = \begin{bmatrix} v \\ 0 \end{bmatrix}.\end{split}\]
Parameters:

amat : (N,N) sparse matrix

left upper entry in the saddlepoint matrix

jmat : (M,N) sparse matrix

left lower entry

jmatT : (N,M) sparse matrix, optional

right upper entry, defaults to jmat.T

rhsv : (N,K) ndarray

array to be projected

umat, vmat : (N,L), (L,N) ndarrays or sparse matrices, optional

factored contribution to amat <- amat - umat*vmat, default to None

transposedprj : boolean

whether to apply the transpose of the projection, defaults to False

Returns:

(N,K) ndarray

projected rhsv
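A dense sketch of the saddle-point route, with J_1 = J_2 = J and random data, that cross-checks the upper solution block against the explicit projector:

```python
import numpy as np

# Solving the coupled system with right-hand side [A*v; 0]
# yields P*v in the upper block.
np.random.seed(2)
n, m = 7, 2
amat = np.eye(n) + 0.1 * np.random.randn(n, n)
jmat = np.random.randn(m, n)
v = np.random.randn(n)

sadmat = np.block([[amat, jmat.T], [jmat, np.zeros((m, m))]])
sol = np.linalg.solve(sadmat, np.concatenate([amat @ v, np.zeros(m)]))
pv = sol[:n]

# cross-check against the explicit projector P = I - A^{-1}J.T(J A^{-1}J.T)^{-1}J
ainv = np.linalg.inv(amat)
pmat = np.eye(n) - ainv @ jmat.T @ np.linalg.inv(jmat @ ainv @ jmat.T) @ jmat
print(np.abs(pv - pmat @ v).max())   # agree up to roundoff
print(np.abs(jmat @ pv).max())       # P*v satisfies the constraint J*(P*v) = 0
```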

lin_alg_utils.solve_sadpnt_smw(amat=None, jmat=None, rhsv=None, jmatT=None, umat=None, vmat=None, rhsp=None, sadlu=None, return_alu=False, decouplevp=False, solve_A=None, symmetric=False, posdefinite=False, cgtol=1e-08, krylov=None, krpslvprms={}, krplsprms={})[source]

solve a saddle point system

\[\begin{split}\begin{bmatrix} A - UV & J_1^T \\ J & 0 \end{bmatrix} \begin{bmatrix} X \\ * \end{bmatrix} = \begin{bmatrix} f_v \\ f_p \end{bmatrix}\end{split}\]

making use of the Sherman-Morrison-Woodbury formula and

  • sparse direct solves by recycling an LU decomposition
  • or CG (if the problem is decoupled and the Schur complement is spd)
Parameters:

amat : (N,N) sparse matrix

left upper entry in the saddlepoint matrix

jmat : (M,N) sparse matrix

left lower entry

jmatT : (N,M) sparse matrix, optional

right upper entry, defaults to jmat.T

rhsv : (N,K) ndarray

upper part of right hand side

rhsp : (M,K) ndarray, optional

lower part of right hand side, defaults to zero

umat, vmat : (N,L), (L,N) ndarrays or sparse matrices, optional

factored contribution to amat, default to None

sadlu : callable f(v), optional

returns the inverse of the saddle-point matrix applied to v, defaults to None

return_alu : boolean, optional

whether to return the LU-factored saddle-point matrix

decouplevp : boolean, optional

whether to decouple the system and solve with A and the Schur complement, defaults to False

Returns:

(N,K) ndarray

the upper part X of the solution

f(v) callable, optional

solve routine based on the LU decomposition of the saddle-point matrix, returned if return_alu is True
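The decoupled variant corresponds to a Schur-complement solve; a dense sketch with random data (not using this module), cross-checked against the coupled solve:

```python
import numpy as np

# Decoupled (Schur-complement) solve of
#   [[A, J.T], [J, 0]] [x; p] = [f_v; f_p]
np.random.seed(3)
n, m = 6, 2
amat = np.eye(n) + 0.1 * np.random.randn(n, n)
jmat = np.random.randn(m, n)
fv, fp = np.random.randn(n), np.random.randn(m)

ainv = np.linalg.inv(amat)
schur = jmat @ ainv @ jmat.T                     # Schur complement J*A^{-1}*J.T
p = np.linalg.solve(schur, jmat @ ainv @ fv - fp)
x = ainv @ (fv - jmat.T @ p)

# cross-check against the coupled solve
sadmat = np.block([[amat, jmat.T], [jmat, np.zeros((m, m))]])
xp = np.linalg.solve(sadmat, np.concatenate([fv, fp]))
print(np.abs(np.concatenate([x, p]) - xp).max())  # agree up to roundoff
```

When the Schur complement is spd, the inner solve can be replaced by CG, which is the option the routine exposes.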

lin_alg_utils.apply_sqrt_fromright(M, rhsa, output=None)[source]

Apply the square root of a mass matrix, or another spd matrix, from the right.

TODO: cases for dense and sparse INPUTS

Parameters:

M : (N,N) sparse matrix

symmetric strictly positive definite

rhsa : (K,N) ndarray or sparse matrix

array the square root of M is to be applied to

output : string, optional

set to ‘sparse’ if rhsa has many zero rows to get the output as a sparse matrix

Returns:

(K,N) ndarray or sparse matrix

the square root of M applied to rhsa from the right
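A dense sketch of the operation, using scipy.linalg.sqrtm on a randomly generated spd matrix (the routine itself may use a different factorization):

```python
import numpy as np
from scipy.linalg import sqrtm

# Apply the square root of an spd matrix M to a (K, N) array from the right.
np.random.seed(4)
n, k = 5, 3
f = np.random.randn(n, n)
mmat = f @ f.T + n * np.eye(n)        # symmetric positive definite
rhsa = np.random.randn(k, n)

msqrt = np.real(sqrtm(mmat))          # principal square root, real for spd M
out = rhsa @ msqrt                    # (K, N) result
# sanity check: (rhsa * sqrt(M)) * sqrt(M) == rhsa * M
print(np.abs(out @ msqrt - rhsa @ mmat).max())
```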

lin_alg_utils.apply_invsqrt_fromright(M, rhsa, output=None)[source]

Apply the square root of the inverse of a mass matrix, or another spd matrix, from the right.

TODO: cases for dense and sparse INPUTS

Parameters:

M : (N,N) sparse matrix

symmetric strictly positive definite

rhsa : (K,N) ndarray or sparse matrix

array the inverse square root of M is to be applied to

output : string, optional

set to ‘sparse’ if rhsa has many zero rows to get the output as a sparse matrix

Returns:

(K,N) ndarray or sparse matrix

the square root of the inverse of M applied to rhsa from the right

lin_alg_utils.get_Sinv_smw(amat_lu, umat=None, vmat=None)[source]

Compute the inverse of I - V*Ainv*U, as it is needed for the application of the Sherman-Morrison-Woodbury formula.

Parameters:

amat_lu : callable f(v) or (N,N) sparse matrix

the main part of the matrix A - U*V, possibly LU-factored

umat, vmat : (N,L), (L,N) ndarrays or sparse matrices

factored contribution to amat_lu

Returns:

(L,L) ndarray

the small inverse in the SMW update

lin_alg_utils.app_luinv_to_spmat(alu_solve, Z)[source]

Compute A.-1*Z, where A comes factored with a solve routine; Z may be real or complex.

Parameters:

alu_solve : callable f(v)

returning a matrix inverse applied to v

Z : (N,K) ndarray, real or complex

the inverse is to be applied to

Returns:

(N,K) ndarray

the matrix inverse applied to Z
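Applying a real factored inverse to a complex block amounts to two real solves; a standalone sketch with scipy's splu (the routine's internals may differ):

```python
import numpy as np
import scipy.sparse as sps
from scipy.sparse.linalg import splu

# Apply a factored real A^{-1} to a complex block Z by splitting Z
# into real and imaginary parts, since the LU factors are real.
np.random.seed(5)
n, k = 6, 2
amat = sps.csc_matrix(np.eye(n) + 0.1 * np.random.randn(n, n))
zmat = np.random.randn(n, k) + 1j * np.random.randn(n, k)

alu = splu(amat)
ainvz = alu.solve(np.real(zmat)) + 1j * alu.solve(np.imag(zmat))
print(np.abs(amat @ ainvz - zmat).max())  # A*(A^{-1}Z) recovers Z
```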

lin_alg_utils.comp_sqfnrm_factrd_diff(zone, ztwo, ret_sing_norms=False)[source]

compute the squared Frobenius norm of z1*z1.T - z2*z2.T

using the linearity of the trace, the identity tr(z1.dot(z2)) = tr(z2.dot(z1)), and the fact that tr(z1.dot(z1.T)) is computed faster via (z1*z1).sum(-1).sum()
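The trace identities let the norm be assembled from small (K,K) products, without ever forming the (N,N) matrices; a minimal check against the direct computation:

```python
import numpy as np

# ||Z1*Z1.T - Z2*Z2.T||_F^2
#   = tr((Z1.T*Z1)^2) - 2*tr((Z1.T*Z2)*(Z2.T*Z1)) + tr((Z2.T*Z2)^2)
np.random.seed(6)
n, k = 8, 3
z1, z2 = np.random.randn(n, k), np.random.randn(n, k)

# only K x K products are needed
sqn = (((z1.T @ z1) ** 2).sum()
       - 2 * ((z1.T @ z2) ** 2).sum()
       + ((z2.T @ z2) ** 2).sum())

# direct computation via the full N x N difference
direct = np.linalg.norm(z1 @ z1.T - z2 @ z2.T, 'fro') ** 2
print(abs(sqn - direct))  # agree up to roundoff
```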

lin_alg_utils.comp_sqfnrm_factrd_lyap_res(A, B, C)[source]

compute the squared Frobenius norm of A*B.T + B*A.T + C*C.T

using the linearity of the trace, the identity tr(z1.dot(z2)) = tr(z2.dot(z1)), and the fact that tr(z1.dot(z1.T)) is computed faster via (z1*z1).sum(-1).sum()

lin_alg_utils.comp_sqfnrm_factrd_sum(zone, ztwo, ret_sing_norms=False)[source]

compute the squared Frobenius norm of z1*z1.T + z2*z2.T

using the linearity of the trace, the identity tr(z1.dot(z2)) = tr(z2.dot(z1)), and the fact that tr(z1.dot(z1.T)) is computed faster via (z1*z1).sum(-1).sum()

lin_alg_utils.mm_dnssps(A, v)[source]

compute A*v for sparse or dense A
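A minimal re-implementation sketch of such a dense/sparse-agnostic product (the actual routine may differ):

```python
import numpy as np
import scipy.sparse as sps

def mm_dnssps(amat, v):
    """Compute amat*v for a sparse or dense amat."""
    if sps.issparse(amat):
        return amat.dot(v)
    return np.dot(amat, v)

aden = np.arange(6.0).reshape(2, 3)
aspa = sps.csr_matrix(aden)
v = np.ones((3, 1))
print(np.allclose(mm_dnssps(aden, v), mm_dnssps(aspa, v)))  # True
```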