
I have

$A = \begin{bmatrix}1&1&-3\\0&2&1\\1&-1&-4\end{bmatrix}$

I row reduce it to

$\begin{bmatrix}1 &0& -3.5\\0&1&.5\\0&0&0\end{bmatrix}$

How do I find col(A) from the above info? Is it that the pivot positions correspond to the columns of $A$ that form a basis?

So col(A) would be $\begin{bmatrix}1\\0\\1\end{bmatrix}$ and $\begin{bmatrix}1\\2\\-1\end{bmatrix}$

and for null(A) I got

$\begin{bmatrix}3.5\\-.5\\1\end{bmatrix}$
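For what it's worth, the row reduction and the null-space vector can be checked mechanically, e.g. in Python with sympy (my own check, not part of the original question):

```python
from sympy import Matrix, Rational

A = Matrix([[1, 1, -3],
            [0, 2, 1],
            [1, -1, -4]])

# rref() returns the reduced row echelon form and the pivot column indices.
R, pivots = A.rref()
print(R)       # rows (1, 0, -7/2), (0, 1, 1/2), (0, 0, 0) -- matches the reduction above
print(pivots)  # (0, 1)

# One free variable (column 3), so the null space is one-dimensional;
# sympy returns the basis vector (7/2, -1/2, 1), i.e. (3.5, -0.5, 1).
print(A.nullspace())
```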

JPHamlett
  • A small, but important omission: Those vectors are not $\operatorname{col}(A)$ and $\operatorname{null}(A)$. The vector spaces spanned by those vectors are. Alternatively: "A basis for $\operatorname{col}(A)$ would be...". It is important that the phrasing of your answer matches what they ask for. I would also consider writing $\left[\begin{smallmatrix}7\\-1\\2\end{smallmatrix}\right]$ instead of what you had for $\operatorname{null}(A)$, but that is just aesthetics, and not important at all. – Arthur Nov 10 '15 at 18:09

3 Answers


Your answer and process seem correct. That is, the vectors $(1,0,1)$ and $(1,2,-1)$ form a basis of the column space, while the vector $(3.5,-.5,1)$ forms a basis of the kernel.
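As a quick machine check of this (my snippet, using Python's sympy; not part of the original answer):

```python
from sympy import Matrix, Rational

A = Matrix([[1, 1, -3],
            [0, 2, 1],
            [1, -1, -4]])

# The pivot columns of the *original* matrix form a basis of col(A).
_, pivots = A.rref()
col_basis = [A.col(j) for j in pivots]
print(col_basis)  # the columns (1, 0, 1) and (1, 2, -1)

# A basis vector of the kernel; sympy scales it as (7/2, -1/2, 1).
kernel = A.nullspace()[0]
print((A * kernel).T)  # the zero row vector: it really is in the kernel
```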

Ben Grossmann

Normally, you should column reduce to find a basis for the column space, or, what amounts to the same thing, row-reduce the transpose matrix: $$\begin{bmatrix} 1&0&1\\1&2&-1\\-3&1&-4\end{bmatrix}\rightsquigarrow\begin{bmatrix} 1&0&1\\0&2&-2\\0&1&-1\end{bmatrix}\rightsquigarrow\begin{bmatrix} 1&0&1\\0&1&-1\\0&0&0\end{bmatrix}$$ This proves that the third column of $A$ is a linear combination of the first two. Hence the first two column vectors are a basis of the column space.

Your method is also valid, but it is an indirect method, as it uses the fact that the row rank and the column rank of a matrix are equal.
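The transpose computation can be reproduced with sympy (my snippet, applied to the question's matrix):

```python
from sympy import Matrix, Rational

A = Matrix([[1, 1, -3],
            [0, 2, 1],
            [1, -1, -4]])

# Row-reduce the transpose; its nonzero rows span the column space of A.
Rt, _ = A.T.rref()
print(Rt)  # rows (1, 0, 1), (0, 1, -1), (0, 0, 0)

# The zero row reflects that column 3 of A depends on columns 1 and 2:
c1, c2, c3 = A.col(0), A.col(1), A.col(2)
print(c3 == Rational(-7, 2) * c1 + Rational(1, 2) * c2)  # True
```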

Bernard
  • It does not necessarily use this fact, as it happens. What it is really using is that we can always select a subset of the columns of a matrix to form a basis of their span, and that the relations among the columns are preserved by row reduction. – Ben Grossmann Nov 10 '15 at 18:38
  • You're right, but in any case, the justification is less direct than row-reducing the transposed matrix (which gives you the linear relations between the columns if you want them). – Bernard Nov 10 '15 at 18:41
  • Hmmm... I think you've convinced me – Ben Grossmann Nov 10 '15 at 19:13

To clear up the confusion, work through the steps. Note that, as in the previous answer, the computation below row-reduces the transpose: the matrix labelled $\mathbf{A}$ here is the transpose of the matrix in the question.

  1. Form the augmented matrix $$ \left[ \begin{array}{c|c} \mathbf{A} & \mathbf{I}_{3} \\ \end{array} \right] = \left[ \begin{array}{rcr|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 2 & -1 & 0 & 1 & 0 \\ -3 & 1 & -4 & 0 & 0 & 1 \\ \end{array} \right] $$
  2. Clear column 1. $$ \left[ \begin{array}{rcc} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 3 & 0 & 1 \\ \end{array} \right] \left[ \begin{array}{rcr|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 2 & -1 & 0 & 1 & 0 \\ -3 & 1 & -4 & 0 & 0 & 1 \\ \end{array} \right] = \left[ \begin{array}{ccr|rcc} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 2 & -2 & -1 & 1 & 0 \\ 0 & 1 & -1 & 3 & 0 & 1 \\ \end{array} \right] $$
  3. Clear column 2. $$ \left[ \begin{array}{crc} 1 & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & -\frac{1}{2} & 1 \\ \end{array} \right] \left[ \begin{array}{ccr|rcc} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 2 & -2 & -1 & 1 & 0 \\ 0 & 1 & -1 & 3 & 0 & 1 \\ \end{array} \right] = \left[ \begin{array}{ccr|rrc} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & -1 & -\frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & 0 & \frac{7}{2} & -\frac{1}{2} & 1 \\ \end{array} \right] $$

Altogether, $$ \begin{align} \left[ \begin{array}{c|c} \mathbf{A} & \mathbf{I}_{3} \\ \end{array} \right] &= \left[ \begin{array}{rcr|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 2 & -1 & 0 & 1 & 0 \\ -3 & 1 & -4 & 0 & 0 & 1 \\ \end{array} \right] \\ &\qquad \qquad \qquad \Downarrow \\ \left[ \begin{array}{c|c} \mathbf{E_{A}} & \mathbf{R} \\ \end{array} \right] &= \left[ \begin{array}{ccr|rrc} \boxed{1} & 0 & 1 & 1 & 0 & 0 \\ 0 & \boxed{1} & -1 & -\frac{1}{2} & \frac{1}{2} & 0 \\\hline 0 & 0 & 0 & \color{red}{\frac{7}{2}} & \color{red}{-\frac{1}{2}} & \color{red}{1} \\ \end{array} \right] \end{align} $$ The unit pivots (boxed) in the matrix $\mathbf{E_{A}}$ identify the fundamental columns of $\mathbf{A}$. The red row vector in $\mathbf{R}$ spans $\mathcal{N}\left(\mathbf{A}^{*}\right)$, which yields the resolution $$ \boxed{ \color{blue}{\text{col } \mathbf{A}} \oplus \color{red} {\text{null } \mathbf{A}^{*}} = \color{blue}{\mathcal{R}\left( \mathbf{A}\right)} \oplus \color{red} {\mathcal{N}\left( \mathbf{A}^{*}\right)} = \color{blue} { \text{span } \left\{ \, \left[ \begin{array}{r} 1 \\ 1 \\ -3 \\ \end{array} \right], \left[ \begin{array}{r} 0 \\ 2 \\ 1 \\ \end{array} \right] \, \right\}} \oplus \color{red} { \text{span } \left\{ \, \left[ \begin{array}{r} \frac{7}{2} \\ -\frac{1}{2} \\ 1 \\ \end{array} \right] \, \right\}}} $$
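The two elimination steps above can be sketched in Python with sympy (variable names mine; here `At` is the transposed matrix that this answer row-reduces):

```python
from sympy import Matrix, Rational

At = Matrix([[1, 0, 1],
             [1, 2, -1],
             [-3, 1, -4]])  # transpose of the question's matrix

# Elementary matrices for the two steps.
E1 = Matrix([[1, 0, 0], [-1, 1, 0], [3, 0, 1]])  # clear column 1
E2 = Matrix([[1, 0, 0],
             [0, Rational(1, 2), 0],
             [0, -Rational(1, 2), 1]])            # clear column 2

# The accumulated row operations R satisfy R * At = E_A (the reduced form).
R = E2 * E1
print(R * At)    # rows (1, 0, 1), (0, 1, -1), (0, 0, 0)
print(R.row(2))  # the red vector (7/2, -1/2, 1)

# It annihilates the question's matrix At.T:
print((At.T * R.row(2).T).T)  # the zero row vector
```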

dantopa