Suppose I have an $n\times n$ matrix to which I add a row of zeros and a column of zeros, somewhere in the matrix, to make it $(n+1)\times (n+1)$. When I multiply the matrix by itself, or by other matrices in which I have inserted a row and column of zeros in the same way, it behaves as though the extra row/column were not there. Is there a way to think about the added row/column, and why it does not affect the product? What have I done to this matrix? I think (though I am not at all sure) that it is as if I have embedded my matrix in a larger matrix space, but it only acts on the subspace where it previously lived?
-
When you say "somewhere" in the matrix, what do you mean? Some ways of inserting rows and columns of zeros get results different from what you describe, so I suspect you have some extra rules that you are following that you have not described. – David K May 03 '17 at 22:18
-
@DavidK I thought that the placement of the null row/column is irrelevant? – Meep May 04 '17 at 12:32
-
If you put the nulls in row $k$ and column $m$ where $k=m$ then I think it works out as you say. If $k\neq m$ then you may get different results. – David K May 04 '17 at 12:51
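The pattern in the question and the comments can be checked numerically. A minimal sketch in numpy (the helper names `embed` and `embed_mixed` are illustrative choices, not library functions), assuming the zero row and zero column are inserted at the same index $k$:

```python
import numpy as np

def embed(M, k):
    """Insert a zero row and a zero column at the same index k."""
    M = np.insert(M, k, 0.0, axis=0)  # zero row at position k
    M = np.insert(M, k, 0.0, axis=1)  # zero column at position k
    return M

def embed_mixed(M, r, c):
    """Insert a zero row at index r and a zero column at index c."""
    return np.insert(np.insert(M, r, 0.0, axis=0), c, 0.0, axis=1)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
k = 1

# Inserting at the same index commutes with multiplication:
assert np.allclose(embed(A, k) @ embed(B, k), embed(A @ B, k))

# Inserting the row and column at *different* indices generally does not:
assert not np.allclose(embed_mixed(A, 0, 2) @ embed_mixed(B, 0, 2),
                       embed_mixed(A @ B, 0, 2))
```

This matches the comment above: with the zero row in position $k$ and the zero column in position $m$, the product mimics the original only when $k = m$.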
1 Answer
It's like being confined to a plane. The vectors all have $2$ components. When you add a row of $0$s, you add a $\color{red}{null}$ space. The inhabitants of the plane can't see any change, because what was added is a $\color{red}{null}$ space.
No null spaces $$ \mathbf{A} = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right) \in\mathbb{C}^{2\times2}_{2} $$ Life is good. The linear system $$ \mathbf{A} x = b $$ always has a unique solution $$ x = b $$ Add a null space
Add a row of $0$s. $$ \mathbf{A} = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right) \in\mathbb{C}^{3\times2}_{2} $$ Life isn't so good. We have solvability conditions based on the subspace decomposition of the data $$ b = \left( \begin{array}{c} b_{1} \\ b_{2} \\ b_{3} \\ \end{array} \right) = \color{blue}{b_{\mathcal{R}}} + \color{red}{b_{\mathcal{N}}} = \color{blue}{\left( \begin{array}{c} b_{1} \\ b_{2} \\ 0 \\ \end{array} \right)} + \color{red}{\left( \begin{array}{c} 0 \\ 0 \\ b_{3} \\ \end{array} \right)} $$
- If $b_{3} \ne 0$, the data has a nonzero component $\color{red}{b_{\mathcal{N}}}$ in the null space, and there is no solution.
- If $b_{3} = 0$, the data lies entirely in $\color{blue}{\mathcal{R}(\mathbf{A})}$, and the unique solution is $x = (b_{1}, b_{2})^{T}$.
- (Adding a column of $0$s instead enlarges $\mathcal{N}(\mathbf{A})$, and then every solvable system has an infinite number of solutions.)
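These solvability cases can be checked with a least-squares solve; a sketch in numpy (`np.linalg.lstsq` returns the least-squares solution, so an inconsistent $b_{3}$ shows up as a nonzero residual rather than an error):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])  # zero row added: N(A*) is spanned by e_3

# Case b3 = 0: b lies in R(A); unique exact solution x = (b1, b2).
b_ok = np.array([2.0, 3.0, 0.0])
x_ok, _, _, _ = np.linalg.lstsq(A, b_ok, rcond=None)
assert np.allclose(x_ok, [2.0, 3.0])
assert np.allclose(A @ x_ok, b_ok)       # solved exactly

# Case b3 != 0: b has a component in N(A*); Ax = b has no solution.
b_bad = np.array([2.0, 3.0, 5.0])
x_ls, _, _, _ = np.linalg.lstsq(A, b_bad, rcond=None)
assert not np.allclose(A @ x_ls, b_bad)  # the residual is exactly the b3 part
```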
Fundamental Theorem of Linear Algebra
The four fundamental subspaces for an arbitrary matrix $\mathbf{A}\in\mathbb{C}^{m\times n}_{\rho}$ can be expressed as $$ \begin{align} % \mathbb{C}^{n} &= \color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)} \oplus \color{red}{\mathcal{N} \left( \mathbf{A} \right)} \\ % \mathbb{C}^{m} &= \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)} % \end{align} $$ The game you are playing involves one of these two themes: $$ \mathbb{C}^{n} = \color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)} \quad \Rightarrow \quad \mathbb{C}^{n} = \color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)} \oplus \color{red}{\mathcal{N} \left( \mathbf{A} \right)} $$ or $$ \mathbb{C}^{m} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \quad \Rightarrow \quad \mathbb{C}^{m} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)} $$
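For contrast with the first theme: a zero *column* enlarges $\color{red}{\mathcal{N}(\mathbf{A})}$ instead, and then any null-space vector can be added to a solution without changing $\mathbf{A}x$. A small numpy sketch:

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])  # zero column added: N(A) is spanned by e_3

b = np.array([2.0, 3.0])
x = np.array([2.0, 3.0, 0.0])    # one particular solution
n = np.array([0.0, 0.0, 1.0])    # null-space direction

# Shifting along the null space leaves A x unchanged:
for t in (0.0, 1.0, -7.5):
    assert np.allclose(A @ (x + t * n), b)
```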
Explore MSE
Existence and uniqueness in terms of the null spaces: Why does $A^{T}Ax=A^{T}b$ have infinitely many solutions algebraically when $A$ has dependent columns?
Exact solution of overdetermined linear system
Adding the null spaces and seeing how life changes: Pseudo-inverse of a matrix that is neither fat nor tall?
