Prove that the rank of a system of vectors from $E^n$ is not bigger than the dimension of the vectors. For example, the vectors $a,b,c$ are from $E^n$, so each of them has $n$ components (the vector $a=(a_{11},a_{21},\dots,a_{n1})$), so the rank $r$ of this system is not bigger than $n$, i.e. $r\le n$. I need to prove this. I was thinking that maybe I can prove that the maximum number of linearly independent vectors in $E^n$ is $n$, so the rank cannot be bigger than $n$, but this is still not a theorem.
- Welcome to our site! As for your question, it is probably already answered in http://math.stackexchange.com/questions/332908/looking-for-an-intuitive-explanation-why-the-row-rank-is-equal-to-the-column-ran/583853#583853 – kjetil b halvorsen Feb 08 '15 at 18:45
- You can write the vectors as the columns of a matrix, and then apply that post! – kjetil b halvorsen Feb 08 '15 at 18:50
2 Answers
$\text{Claim: }$ It is not possible to have more than $n$ linearly independent vectors in $E^n$.
$\text{Proof: }$ For a contradiction, suppose we have $m>n$ linearly independent vectors in $E^n$. Let $A$ be the $n \times m$ matrix formed by using these vectors as its columns. Then the rank of $A$ is $m$, since all of its columns are linearly independent. However, $A$ is an $n \times m$ matrix, so its row echelon form can have at most $n$ pivot columns. This implies that its rank is less than or equal to $n$. That is, $m\le n$, which contradicts $m>n$. So we conclude that we cannot have more than $n$ linearly independent vectors in $E^n$.
As you stated in your question, the statement you wish to prove follows from the above result. If you have any questions feel free to post them in the comments. Hope this helps!
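If you want a quick numerical sanity check of the claim (not a proof, and assuming NumPy's `matrix_rank` is available), you can build an $n \times m$ matrix with $m > n$ columns and see that its rank never exceeds $n$; the sizes below are just illustrative:

```python
import numpy as np

# Sanity check (not a proof): the rank of an n x m matrix never exceeds n,
# so any m > n column vectors in E^n must be linearly dependent.
rng = np.random.default_rng(0)

n, m = 3, 5                        # illustrative sizes: 5 vectors in E^3
A = rng.standard_normal((n, m))    # columns are m random vectors in E^n

rank = np.linalg.matrix_rank(A)
print(rank)                        # at most n = 3, whatever the 5 columns are
assert rank <= n
```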

- No need to apologize :) Have you heard of row echelon form? The proof is still valid if you replace "reduced row echelon form" with "row echelon form". I'll edit my answer accordingly. If you haven't heard of row echelon form, then I'll be happy to explain. – Gecko Feb 08 '15 at 19:18
- No problem. Have a look at this article: http://en.wikipedia.org/wiki/Row_echelon_form. Row echelon form is defined in the first section, and an example of a matrix in row echelon form is given. If you have any questions, don't hesitate to ask. – Gecko Feb 08 '15 at 20:35
Not a proof, but a simple example that I hope can clarify the problem. I use vectors in $\mathbb{R}^2$ to keep the calculations simple. Given two nonzero vectors $$\vec a= \left[ \begin{array}{c} a_1 \\ a_2 \end{array} \right] $$ and $$\vec b= \left[ \begin{array}{c} b_1 \\ b_2 \end{array} \right] $$ suppose that $a_1 \ne 0$ and $b_1 \ne 0$ (since the vectors are nonzero, at least one component of each must be nonzero). What does it mean for them to be linearly dependent? That we can find two numbers $x, y$, not both zero, such that $ x\vec a+y\vec b=0$, i.e. $$ x \left[ \begin{array}{c} a_1 \\ a_2 \end{array} \right] +y \left[ \begin{array}{c} b_1 \\ b_2 \end{array} \right]= \left[ \begin{array}{c} xa_1+yb_1 \\ xa_2+yb_2 \end{array} \right]=0 $$ This is equivalent to the system: $$ \begin{cases} a_1x+b_1y=0\\ a_2x+b_2y=0 \end{cases} $$ Solving the first equation for $y$, substituting into the second equation and solving for $x$, you find: $$ \begin{cases} y=-\dfrac{a_1x}{b_1}\\ x(a_2b_1-b_2a_1)=0 \end{cases} $$ So you see that if $(a_2b_1-b_2a_1)\ne 0$ you must have $x=y=0$, so:
$$ (a_1b_2-b_1a_2)\ne 0 $$
is the condition for the two vectors to be linearly independent.
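In code, this condition is just the $2\times 2$ determinant $a_1b_2-b_1a_2$. Here is a minimal sketch, assuming NumPy and using a hypothetical helper name `independent_2d`, that checks independence of two vectors in $\mathbb{R}^2$ this way:

```python
import numpy as np

def independent_2d(a, b, tol=1e-12):
    """Two vectors in R^2 are linearly independent iff a1*b2 - b1*a2 != 0."""
    return abs(a[0] * b[1] - b[0] * a[1]) > tol

print(independent_2d(np.array([1.0, 2.0]), np.array([3.0, 4.0])))  # True: determinant = -2
print(independent_2d(np.array([1.0, 2.0]), np.array([2.0, 4.0])))  # False: b = 2a
```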
Now suppose that $\vec a$ and $\vec b$ are linearly independent. I want to show that for any third vector $$ \vec c= \left[ \begin{array}{c} c_1 \\ c_2 \end{array} \right] $$ with $c_1 \ne 0$, the three vectors $\vec a,\vec b,\vec c$ are linearly dependent. For this we have to show that there exist three numbers $x,y,z$, not all $=0$, such that $x\vec a+y\vec b+z\vec c=0$. This gives the system: $$ \begin{cases} a_1x+b_1y+c_1z=0\\ a_2x+b_2y+c_2z=0 \end{cases} $$ Solving the first for $x$, substituting and solving for $y$, we find: $$ \begin{cases} x=-\dfrac{c_1z+b_1y}{a_1}\\ y=\dfrac{z(a_1c_2-a_2c_1)}{a_2b_1-a_1b_2} \end{cases} $$ If $(a_1c_2-a_2c_1) \ne 0$, the vectors $\vec a$ and $\vec c$ are linearly independent, and in this case, for $z\ne 0$ we can always find $y \ne 0$ and then $x$, so the identity defining linear dependence of the three vectors is satisfied. Otherwise $(a_1c_2-a_2c_1)=0$, so $\vec a$ and $\vec c$ are linearly dependent, and the three vectors are again linearly dependent.
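A minimal numerical check of this recipe, assuming NumPy (the specific vectors are only illustrative): pick $z=1$, compute $y$ and $x$ from the formulas above, and verify that $x\vec a+y\vec b+z\vec c=0$:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])   # a, b linearly independent: a1*b2 - a2*b1 = -2 != 0
c = np.array([5.0, 6.0])   # any third vector

z = 1.0
y = z * (a[0] * c[1] - a[1] * c[0]) / (a[1] * b[0] - a[0] * b[1])
x = -(c[0] * z + b[0] * y) / a[0]

print(x * a + y * b + z * c)   # ~ [0, 0]: the three vectors are linearly dependent
```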
As noted at the beginning, this is not a proof. To extend this reasoning to vectors in $\mathbb{R}^n$ we need the properties of matrices shown in Gecko's answer, but the meaning of those properties is not so different.

- Your use of notation is a bit confusing (learn LaTeX! and be careful with indices). The main concept is that a set of $n$ linearly independent vectors in $\mathbb{R}^n$ is a basis of $\mathbb{R}^n$ (not orthonormal in general). If the $n$ vectors are linearly dependent then the set may contain a sub-basis of $m < n$ linearly independent vectors. – Emilio Novati Feb 09 '15 at 16:36