
We are learning about pseudoinverses using the Strang book and I am just confused as to how to interpret the pseudoinverse.

How come $\mathbf{A}^{+}\mathbf{A}$ projects onto the row space and $\mathbf{A}\mathbf{A}^{+}$ projects onto the column space? What does it mean when it says that $\mathbf{A}^{+}$ takes a vector from the column space to the row space? Is it because if $\mathbf{A}x=b$, $\mathbf{A}^{+}$ produces $\mathbf{A}^{+}b = x$?

Would appreciate any help - thanks!

maln
  • You may find some help here: Least Square with homogeneous solution! (https://math.stackexchange.com/questions/578376/least-square-with-homogeneous-solution/2256428#2256428) and Least squares solutions and the orthogonal projector onto the column space (https://math.stackexchange.com/questions/2033896/least-squares-solutions-and-the-orthogonal-projector-onto-the-column-space/2180194#2180194) – dantopa Dec 04 '18 at 04:13
  • See the “Projectors” section of https://en.m.wikipedia.org/wiki/Moore–Penrose_inverse – amd Dec 04 '18 at 05:32
  • @amd Yeah I looked at that, but they just state it rather than explain how that works. – maln Dec 04 '18 at 05:50
  • Actually, they do explain how it works via the identities in the first two paragraphs of that section. The first paragraph shows that they are indeed orthogonal projectors, and for instance $PA=A$ says that $P$ is the identity map on the range of $A$. – amd Dec 04 '18 at 06:10

1 Answer


The Singular Value Decomposition

The singular value decomposition of a matrix is $$ \begin{align} \mathbf{A} &= \mathbf{U} \, \Sigma \, \mathbf{V}^{*} \\ % &= % U \left[ \begin{array}{cc} \color{blue}{\mathbf{U}_{\mathcal{R}}} & \color{red}{\mathbf{U}_{\mathcal{N}}} \end{array} \right] % Sigma \left[ \begin{array}{cccc|cc} \sigma_{1} & 0 & \dots & & & \dots & 0 \\ 0 & \sigma_{2} \\ \vdots && \ddots \\ & & & \sigma_{\rho} \\\hline & & & & 0 & \\ \vdots &&&&&\ddots \\ 0 & & & & & & 0 \\ \end{array} \right] % V \left[ \begin{array}{c} \color{blue}{\mathbf{V}_{\mathcal{R}}}^{*} \\ \color{red}{\mathbf{V}_{\mathcal{N}}}^{*} \end{array} \right] \\ % & = % U \left[ \begin{array}{cccccccc} \color{blue}{u_{1}} & \dots & \color{blue}{u_{\rho}} & \color{red}{u_{\rho+1}} & \dots & \color{red}{u_{m}} \end{array} \right] % Sigma \left[ \begin{array}{cc} \mathbf{S}_{\rho\times \rho} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{array} \right] % V \left[ \begin{array}{c} \color{blue}{v_{1}^{*}} \\ \vdots \\ \color{blue}{v_{\rho}^{*}} \\ \color{red}{v_{\rho+1}^{*}} \\ \vdots \\ \color{red}{v_{n}^{*}} \end{array} \right] % \end{align} $$ The color blue denotes range-space objects; the color red, null-space objects. The connection to the fundamental subspaces is direct: $$ \begin{array}{ll} % column \ vectors & span \\\hline % \color{blue}{u_{1}} \dots \color{blue}{u_{\rho}} & \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \\ % \color{blue}{v_{1}} \dots \color{blue}{v_{\rho}} & \color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)} \\ % \color{red}{u_{\rho+1}} \dots \color{red}{u_{m}} & \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)} \\ % \color{red}{v_{\rho+1}} \dots \color{red}{v_{n}} & \color{red}{\mathcal{N} \left( \mathbf{A} \right)} \\ % \end{array} $$ The vectors $\color{blue}{\{u_{k}\}}_{k=1}^{\rho}$, the column vectors of $\color{blue}{\mathbf{U}_{\mathcal{R}}}$, form an orthonormal basis for the column space of $\mathbf{A}$. Similarly, the vectors $\color{blue}{\{v_{k}\}}_{k=1}^{\rho}$, the column vectors of $\color{blue}{\mathbf{V}_{\mathcal{R}}}$, form an orthonormal basis for the row space.
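The subspace correspondence above can be checked numerically. This is a sketch, not part of the original answer: it assumes NumPy is available, and the matrix `A` is an arbitrary rank-2 example chosen for illustration. `np.linalg.svd` returns $\mathbf{U}$, the singular values, and $\mathbf{V}^{*}$ (as `Vh`), from which the range and null-space blocks are sliced.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])          # arbitrary rank-2 example

U, s, Vh = np.linalg.svd(A)
rho = np.sum(s > 1e-12)                  # numerical rank

U_R, U_N = U[:, :rho], U[:, rho:]        # range / null-space blocks of U
V_R, V_N = Vh[:rho, :].T, Vh[rho:, :].T  # range / null-space blocks of V

# Columns of A lie in span(U_R): projecting onto it leaves A unchanged.
print(np.allclose(U_R @ (U_R.T @ A), A))   # True
# V_N spans N(A): A annihilates it.
print(np.allclose(A @ V_N, 0))             # True
```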

The Moore-Penrose Pseudoinverse Matrix

The Moore-Penrose pseudoinverse matrix arises naturally (see Singular value decomposition proof) from using the SVD to solve the least squares problem:

$$ \begin{align} \mathbf{A}^{+} &= \mathbf{V} \, \Sigma^{+} \mathbf{U}^{*} \\ % &= % V \left[ \begin{array}{cc} \color{blue}{\mathbf{V}_{\mathcal{R}}} & \color{red}{\mathbf{V}_{\mathcal{N}}} \end{array} \right] % Sigma \left[ \begin{array}{cc} \mathbf{S}^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{array} \right] % % U \left[ \begin{array}{c} \color{blue}{\mathbf{U}_{\mathcal{R}}}^{*} \\ \color{red}{\mathbf{U}_{\mathcal{N}}}^{*} \end{array} \right] \\ % \end{align} % $$
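The block form $\mathbf{A}^{+} = \mathbf{V}\,\Sigma^{+}\mathbf{U}^{*}$ can be assembled directly from the SVD factors and compared against a library pseudoinverse. A sketch assuming NumPy (`np.linalg.pinv` is NumPy's Moore-Penrose routine; the example matrix is hypothetical):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

U, s, Vh = np.linalg.svd(A)
rho = np.sum(s > 1e-12)

# A^+ = V_R S^{-1} U_R^* : only the rank-rho blocks contribute,
# since the zero blocks of Sigma^+ annihilate U_N and V_N.
A_pinv = Vh[:rho, :].T @ np.diag(1.0 / s[:rho]) @ U[:, :rho].T

print(np.allclose(A_pinv, np.linalg.pinv(A)))   # True
```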

The Fundamental Projectors

The four fundamental orthogonal projectors are $$ \begin{align} % \mathbf{P}_\color{blue}{\mathcal{R}\left( \mathbf{A} \right)} &= \mathbf{A}\mathbf{A}^{+} & % \mathbf{P}_\color{red}{\mathcal{N}\left( \mathbf{A}^{*} \right)} &= \mathbf{I}_{m} - \mathbf{A}\mathbf{A}^{+} \\ % \mathbf{P}_\color{blue}{\mathcal{R}\left( \mathbf{A}^{*} \right)} &= \mathbf{A}^{+}\mathbf{A} & % \mathbf{P}_\color{red}{\mathcal{N}\left( \mathbf{A} \right)} &= \mathbf{I}_{n} - \mathbf{A}^{+}\mathbf{A} \\ % \end{align} $$
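All four projectors can be built from the pseudoinverse and checked for the two properties that characterize an orthogonal projector: idempotency ($\mathbf{P}^2 = \mathbf{P}$) and Hermitian symmetry ($\mathbf{P}^* = \mathbf{P}$). A numerical sketch assuming NumPy and the same hypothetical example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])
m, n = A.shape
Ap = np.linalg.pinv(A)

P_col   = A @ Ap               # onto R(A)
P_row   = Ap @ A               # onto R(A*)
P_lnull = np.eye(m) - P_col    # onto N(A*)
P_null  = np.eye(n) - P_row    # onto N(A)

# Orthogonal projectors: idempotent and symmetric (Hermitian in general).
for P in (P_col, P_row, P_lnull, P_null):
    print(np.allclose(P @ P, P), np.allclose(P, P.T))   # True True
```

Note that each complementary pair sums to the identity, mirroring the $\mathbf{I}_m - \mathbf{A}\mathbf{A}^{+}$ and $\mathbf{I}_n - \mathbf{A}^{+}\mathbf{A}$ formulas above.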

Projection onto $\color{blue}{\mathcal{R}\left( \mathbf{A} \right)}$

Using the decomposition for the target matrix and the concomitant pseudoinverse produces

$$\mathbf{P}_\color{blue}{\mathcal{R}\left( \mathbf{A} \right)} = \mathbf{A}\mathbf{A}^{+} = \left( \left[ \begin{array}{cc} \color{blue}{\mathbf{U}_{\mathcal{R}}} & \color{red}{\mathbf{U}_{\mathcal{N}}} \end{array} \right] \left[ \begin{array}{cc} \mathbf{S} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{array} \right] \left[ \begin{array}{c} \color{blue}{\mathbf{V}_{\mathcal{R}}}^{*} \\ \color{red}{\mathbf{V}_{\mathcal{N}}}^{*} \end{array} \right] \right) \left( \left[ \begin{array}{cc} \color{blue}{\mathbf{V}_{\mathcal{R}}} & \color{red}{\mathbf{V}_{\mathcal{N}}} \end{array} \right] \left[ \begin{array}{cc} \mathbf{S}^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{array} \right] \left[ \begin{array}{c} \color{blue}{\mathbf{U}_{\mathcal{R}}}^{*} \\ \color{red}{\mathbf{U}_{\mathcal{N}}}^{*} \end{array} \right] \right) = \color{blue}{\mathbf{U}_{\mathcal{R}}} \color{blue}{\mathbf{U}_{\mathcal{R}}}^{*} $$ The columns of $\color{blue}{\mathbf{U}_{\mathcal{R}}}$ form an orthonormal basis for the column space of $\mathbf{A}$, so this product is exactly the orthogonal projector onto $\color{blue}{\mathcal{R}\left( \mathbf{A} \right)}$.
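The collapse of the product to $\color{blue}{\mathbf{U}_{\mathcal{R}}} \color{blue}{\mathbf{U}_{\mathcal{R}}}^{*}$ can be verified numerically, again a sketch assuming NumPy and the hypothetical rank-2 example:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

U, s, Vh = np.linalg.svd(A)
rho = np.sum(s > 1e-12)
U_R = U[:, :rho]

P = A @ np.linalg.pinv(A)
print(np.allclose(P, U_R @ U_R.T))   # True: P_{R(A)} = U_R U_R^*
print(np.allclose(P @ A, A))         # True: identity map on the column space
```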

Projection onto $\color{blue}{\mathcal{R}\left( \mathbf{A}^{*} \right)}$

Similar machinations will reveal $$ \mathbf{P}_\color{blue}{\mathcal{R}\left( \mathbf{A}^{*} \right)} = \color{blue}{\mathbf{V}_{\mathcal{R}}} \color{blue}{\mathbf{V}_{\mathcal{R}}}^{*} $$
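This second projector answers the question directly: if $\mathbf{A}x = b$, then $\mathbf{A}^{+}b = \mathbf{A}^{+}\mathbf{A}x$ is the row-space component of $x$, i.e. the minimum-norm solution, not necessarily $x$ itself. A final numerical sketch assuming NumPy and the same hypothetical example:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

U, s, Vh = np.linalg.svd(A)
rho = np.sum(s > 1e-12)
V_R = Vh[:rho, :].T

Ap = np.linalg.pinv(A)
print(np.allclose(Ap @ A, V_R @ V_R.T))   # True: P_{R(A*)} = V_R V_R^*

# If Ax = b, A^+ b recovers only the row-space component of x.
x = np.array([1.0, -1.0, 2.0])
b = A @ x
print(np.allclose(Ap @ b, (V_R @ V_R.T) @ x))   # True
```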

dantopa