This is a question that I've been mulling over for a while. Given some linear combination of matrices:
\begin{equation*} w_{0}A_{0} + w_{1}A_{1} + \ldots + w_{n}A_{n} \end{equation*}
Assuming that all of these matrices are invertible and known (i.e., each $A_{i}^{-1}$ is known beforehand for all $i$), and additionally assuming that the inverse of the resulting matrix actually exists (which is not guaranteed in general, e.g., under floating-point arithmetic), is there some way to efficiently compute the inverse of this matrix?
Or in other words, can I somehow compute the matrix:
\begin{equation*} (w_{0}A_{0} + w_{1}A_{1} + \ldots + w_{n}A_{n})^{-1} \end{equation*}
using the known inverses $A_{i}^{-1}$? Alternatively, is there a way to approximate this inverse under the same assumptions?
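To make the setup concrete, here is a small pure-Python sketch (using $2 \times 2$ matrices for brevity; the matrix values and helper functions are just illustrative) showing why this is not trivial: the naive guess $w_{0}A_{0}^{-1} + w_{1}A_{1}^{-1}$ is not the inverse of $w_{0}A_{0} + w_{1}A_{1}$ in general.

```python
def mat_mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def weighted_sum(A, B, wa, wb):
    # wa*A + wb*B, entrywise
    return [[wa * A[i][j] + wb * B[i][j] for j in range(2)] for i in range(2)]

def inv2(A):
    # closed-form inverse of a 2x2 matrix
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

# arbitrary invertible example matrices and weights
A0 = [[2.0, 0.0], [0.0, 3.0]]
A1 = [[1.0, 1.0], [0.0, 1.0]]
w0, w1 = 0.5, 2.0

S = weighted_sum(A0, A1, w0, w1)                  # w0*A0 + w1*A1
S_inv = inv2(S)                                   # the inverse we actually want
naive = weighted_sum(inv2(A0), inv2(A1), w0, w1)  # w0*A0^{-1} + w1*A1^{-1}

print(mat_mul(S_inv, S))   # approximately the identity
print(mat_mul(naive, S))   # far from the identity
```

So the question is whether the known $A_{i}^{-1}$ can enter the computation in some less naive way.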
In my case, these matrices are all 4x4 affine transformation matrices (i.e., some combination of scaling, rotation, and translation transforms), which may simplify things in practice, but I'm also curious whether this is possible for general matrices.
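One partial simplification from the affine structure (a sketch, assuming each matrix has bottom row $(0\ 0\ 0\ 1)$): writing each matrix and its known inverse in block form,

\begin{equation*} A_{i} = \begin{pmatrix} M_{i} & t_{i} \\ 0 & 1 \end{pmatrix}, \qquad A_{i}^{-1} = \begin{pmatrix} M_{i}^{-1} & -M_{i}^{-1}t_{i} \\ 0 & 1 \end{pmatrix}, \end{equation*}

the weighted sum is $\begin{pmatrix} \sum_{i} w_{i}M_{i} & \sum_{i} w_{i}t_{i} \\ 0 & \sum_{i} w_{i} \end{pmatrix}$, which is again an affine matrix whenever $\sum_{i} w_{i} = 1$. In that case the problem reduces to inverting the $3 \times 3$ linear block $\sum_{i} w_{i}M_{i}$, although that still leaves the same question for the weighted sum of the $M_{i}$.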
I think this is related to the question Inverse of the sum of matrices; however, it is not clear to me how (or whether) that result extends to more than two matrices with weights.