
I'm very new to signal processing (seismic 1D, 2D, and 3D signals) and have been reading many papers recently. One thing I encounter quite often is the use of the adjoint matrix.

If $d = Am$, where $d$ is the data, $A$ is an operator that models a physical process, and $m$ is the model, many people define $\widetilde m=A^Td$ or $\widetilde m=A^*d$ to be a pseudo-inverse solution for $m$ and sometimes use $\widetilde m$ as a starting model. I know $A^{-1}=A^T$ holds sometimes, but in general it does not. So how good is this pseudo-inverse?

Can someone give me some more insight or give me some reference on this?

Terrence
  • Are you talking about unitary matrices? http://math.stackexchange.com/questions/2053194/what-is-the-difference-between-transpose-and-inverse/2203220#2203220 – dantopa Mar 26 '17 at 02:05

1 Answer


There may be some confusion.

Matrices $\mathbf{U}$ for which $$ \mathbf{U}^{*} \mathbf{U} = \mathbf{U} \, \mathbf{U}^{*} = \mathbf{I} $$ are called unitary.

In general, use of the pseudoinverse matrix implies the classical matrix inverse does not exist. (When the classical inverse does exist, it equals the pseudoinverse.)
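A quick numerical sketch of this point (illustrative NumPy code, not part of the original answer): for an invertible square matrix the pseudoinverse and the classical inverse coincide, while for a rectangular matrix only the pseudoinverse is defined.

```python
import numpy as np

# Invertible square matrix: pseudoinverse coincides with the classical inverse.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert np.allclose(np.linalg.pinv(A), np.linalg.inv(A))

# Rectangular matrix: the classical inverse does not exist,
# but the Moore-Penrose pseudoinverse still does.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])        # 3x2, full column rank
B_pinv = np.linalg.pinv(B)        # 2x3
# With full column rank, B_pinv @ B = I, while B @ B_pinv is only
# an orthogonal projector onto the column space of B.
assert np.allclose(B_pinv @ B, np.eye(2))
```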



To address the concerns of @Michael:

The Moore-Penrose pseudoinverse is defined and manipulated in the following references.

Your specific question seems to be: when does the adjoint of a matrix, $\mathbf{A}^{*}$, equal the pseudoinverse of the matrix, $\mathbf{A}^{+}$? $$ \mathbf{A}^{*} = \mathbf{A}^{+} $$

Given a matrix $\mathbf{A}\in\mathbb{C}^{m\times n}_{\rho}$, with $\rho \le \min(m,n)$, the singular value decomposition can be expressed in terms of the fundamental subspaces as: $$ \begin{align} \mathbf{A} = \mathbf{U} \, \Sigma \, \mathbf{V}^{*} = \left[ \begin{array}{cc} \color{blue}{\mathbf{U}_{\mathcal{R}}} & \color{red}{\mathbf{U}_{\mathcal{N}}} \end{array} \right] \left[ \begin{array}{cc} \mathbf{S} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{array} \right] \left[ \begin{array}{c} \color{blue}{\mathbf{V}_{\mathcal{R}}}^{*} \\ \color{red}{\mathbf{V}_{\mathcal{N}}}^{*} \end{array} \right] \end{align} $$

The adjoint matrix is then $$ \begin{align} \mathbf{A}^{*} = \mathbf{V} \, \Sigma^{T} \, \mathbf{U}^{*} = \left[ \begin{array}{cc} \color{blue}{\mathbf{V}_{\mathcal{R}}} & \color{red}{\mathbf{V}_{\mathcal{N}}} \end{array} \right] \left[ \begin{array}{cc} \mathbf{S}^{T} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{array} \right] \left[ \begin{array}{c} \color{blue}{\mathbf{U}_{\mathcal{R}}}^{*} \\ \color{red}{\mathbf{U}_{\mathcal{N}}}^{*} \end{array} \right] \end{align} $$ Because the singular values are real and the matrix $\mathbf{S}$ is diagonal, $$ \mathbf{S} = \mathbf{S}^{T} $$

The Moore-Penrose pseudoinverse is $$ \begin{align} \mathbf{A}^{+} = \mathbf{V} \, \Sigma^{+} \, \mathbf{U}^{*} = \left[ \begin{array}{cc} \color{blue}{\mathbf{V}_{\mathcal{R}}} & \color{red}{\mathbf{V}_{\mathcal{N}}} \end{array} \right] \left[ \begin{array}{cc} \mathbf{S}^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{array} \right] \left[ \begin{array}{c} \color{blue}{\mathbf{U}_{\mathcal{R}}}^{*} \\ \color{red}{\mathbf{U}_{\mathcal{N}}}^{*} \end{array} \right] \end{align} $$
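The SVD construction above can be checked numerically (an illustrative NumPy sketch, not part of the original answer): build $\mathbf{A}^{+}$ by inverting only the nonzero singular values and compare against the library pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))          # arbitrary rectangular matrix

# Thin SVD: A = U @ diag(s) @ Vh
U, s, Vh = np.linalg.svd(A, full_matrices=False)

# Pseudoinverse from the SVD: invert only the nonzero singular values
# (values below a tolerance are treated as zero, matching the rank rho).
tol = max(A.shape) * np.finfo(float).eps * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)
A_pinv = Vh.conj().T @ np.diag(s_inv) @ U.conj().T

assert np.allclose(A_pinv, np.linalg.pinv(A))
```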

When does $\mathbf{A}^{*} = \mathbf{A}^{+}$? That is, when does $$ \left[ \begin{array}{cc} \color{blue}{\mathbf{V}_{\mathcal{R}}} & \color{red}{\mathbf{V}_{\mathcal{N}}} \end{array} \right] \left[ \begin{array}{cc} \mathbf{S} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{array} \right] \left[ \begin{array}{c} \color{blue}{\mathbf{U}_{\mathcal{R}}}^{*} \\ \color{red}{\mathbf{U}_{\mathcal{N}}}^{*} \end{array} \right] = \left[ \begin{array}{cc} \color{blue}{\mathbf{V}_{\mathcal{R}}} & \color{red}{\mathbf{V}_{\mathcal{N}}} \end{array} \right] \left[ \begin{array}{cc} \mathbf{S}^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{array} \right] \left[ \begin{array}{c} \color{blue}{\mathbf{U}_{\mathcal{R}}}^{*} \\ \color{red}{\mathbf{U}_{\mathcal{N}}}^{*} \end{array} \right] ? $$

The equality is satisfied when $$ \mathbf{S} = \mathbf{S}^{-1} = \mathbf{I}_{\rho} $$ that is, when every nonzero singular value equals $1$.
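This condition can be demonstrated numerically (an illustrative NumPy sketch, not part of the original answer): a matrix with orthonormal columns has all singular values equal to $1$, so its adjoint equals its pseudoinverse, while a generic matrix fails the test.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tall matrix with orthonormal columns: all singular values equal 1.
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))   # Q is 5x3, Q* Q = I

# Here the adjoint (conjugate transpose) IS the pseudoinverse.
assert np.allclose(Q.conj().T, np.linalg.pinv(Q))

# A generic matrix has singular values != 1, so the adjoint is
# only a crude approximation of the pseudoinverse.
A = rng.standard_normal((5, 3))
assert not np.allclose(A.conj().T, np.linalg.pinv(A))
```

This is why the adjoint $\widetilde m=A^*d$ in the question is only a starting model: unless the operator's singular values are all close to $1$, it differs from the least-squares solution.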

References:

  1. Pseudoinverse - Interpretation
  2. What forms does the Moore-Penrose inverse take...
  3. Singular value decomposition proof
  4. What is the SVD of $A^{-1}$?
  5. Pseudo Inverse Solution for Linear Equation System Using the SVD
dantopa
  • What you are saying is for the case of square matrices. What the OP is referring to is when the matrix is not square and you need to rely on the pseudoinverse. – Michael May 04 '18 at 13:23
  • @Michael. The analysis here is for the most general case where $m\neq n$. That is, there is no restriction that the number of rows must equal the number of columns. The matrix may take arbitrary form. – dantopa Feb 09 '22 at 22:38