
Suppose matrices $A\in\mathbb{R}^{2\times3}$ and $B\in\mathbb{R}^{3\times3}$ are available for computing $(AB^{-1}A^T)^{-1}$.

If $A$ were square and invertible, we could simplify this to $A^{-T}BA^{-1}$. Is a simplification possible when $A$ is non-square?
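(For a square invertible $A$, that simplification is just the reversal rule for the inverse of a product:
$$(AB^{-1}A^T)^{-1} = (A^T)^{-1}(B^{-1})^{-1}A^{-1} = A^{-T}BA^{-1}.)$$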

Background

The application here is to calculate the precision matrix (inverse covariance matrix) of a Gaussian random variable $y=Ax$, where $y\in\mathbb{R}^2$, $x\in\mathbb{R}^3$ and $x\sim N(\mu,B^{-1})$. Here $B$ is known and could be close to singular (hence the desire to work with precision matrices rather than covariance matrices).
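For concreteness, a minimal numpy sketch of the direct computation in question; the matrix values below are illustrative placeholders (not data from the question), and this is exactly the route one would like to simplify when $B$ is ill-conditioned:

```python
import numpy as np

# Illustrative placeholder matrices; only the shapes (2x3 and 3x3) come from the question.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])      # 2x3
B = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 3.0]])       # 3x3 symmetric positive definite precision of x

# Cov(y) = A Cov(x) A^T = A B^{-1} A^T; the precision of y is its inverse.
cov_y = A @ np.linalg.solve(B, A.T)   # solve(B, A.T) avoids forming B^{-1} explicitly
prec_y = np.linalg.inv(cov_y)         # 2x2 precision matrix of y
print(prec_y)
```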


1 Answer


The pseudo-inverse of a non-square matrix can be computed (among other methods) from the singular value decomposition. Suppose $A = U\Sigma V^T$, where $U \in \mathbb{R}^{2\times2}$ and $V\in \mathbb{R}^{3\times3}$ are orthogonal matrices and $\Sigma \in \mathbb{R}^{2\times3}$ carries the singular values of $A$ on its diagonal. The pseudo-inverse is then $$A^\dagger = V \Sigma^\dagger U^T\in \mathbb{R}^{3\times2},$$ where $\Sigma^\dagger$ is obtained by taking the reciprocal of every non-zero entry of $\Sigma$ and transposing the result.

Because $A$ is not square, $A^\dagger$ is only a one-sided inverse: $AA^\dagger$ is the $2\times2$ identity (assuming $A$ has full row rank), whereas $A^\dagger A$ is only a $3\times3$ projection. So the inverse of $AB^{-1}A^T$ is a bit of a conundrum: no pre-multiplication or post-multiplication by $A^\dagger$ will isolate $B^{-1}$ in this expression. Therefore, I think that one cannot simplify this inverse any further.
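A small numpy sketch of this one-sidedness, with made-up values for $A$ (any full-row-rank $2\times3$ matrix would do):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])                   # 2x3, full row rank

U, s, Vt = np.linalg.svd(A, full_matrices=True)   # U: 2x2, s: (2,), Vt: 3x3
Sigma_pinv = np.zeros((3, 2))
Sigma_pinv[:len(s), :len(s)] = np.diag(1.0 / s)   # invert the non-zero singular values
A_pinv = Vt.T @ Sigma_pinv @ U.T                  # 3x2, matches np.linalg.pinv(A)

print(np.allclose(A @ A_pinv, np.eye(2)))   # True:  A A^dagger = I_2 (full row rank)
print(np.allclose(A_pinv @ A, np.eye(3)))   # False: A^dagger A is only a 3x3 projector
```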