
In the book Linear Algebra by Werner Greub, page 100, the following is stated:

Let $\phi$ be a $p$-linear map from $E$ to $F$. Then the following statements are equivalent:

(i) $\phi$ is skew-symmetric

(ii) $\phi(x_1, \dots, x_p) = 0$ whenever $x_i = x_j$ for some pair $i \neq j$

(iii) $\phi(x_1, \dots, x_p) = 0$ whenever the vectors $x_1, \dots, x_p$ are linearly dependent.
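(A familiar instance, not part of the quoted passage but perhaps helpful: the determinant, viewed as an $n$-linear map $\det : (\mathbb{R}^n)^n \to \mathbb{R}$ in the columns of a matrix, satisfies all three conditions: swapping two columns changes the sign (i), a matrix with two equal columns has determinant $0$ (ii), and a matrix with linearly dependent columns has determinant $0$ (iii).)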

My question: if we assume that $(ii)$ is true, then the vectors $x_1, \dots, x_p$ must be linearly dependent, since $x_i = 1_E x_j$, so $(iii)$ seems to be a direct conclusion of $(ii)$. But in the book, one of the vectors, say $x_p$, is chosen without loss of generality and written as a linear combination of the vectors $x_1, \dots, x_{p-1}$, and I didn't follow that step. I mean, there is nothing wrong with that proof, but I think it is unnecessary, because there is no way that $(ii)$ is true while the vectors $x_1, \dots, x_p$ are linearly independent.

Edit:

What I don't get is this: when $(ii)$ is assumed, is there any way that $(iii)$ can be false?
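For reference, the step in the book that puzzled me, as I understand it, runs as follows. If $x_1, \dots, x_p$ are linearly dependent, then some $x_k$ is a linear combination of the remaining vectors; for notational simplicity say $k = p$ (the computation for any other $k$ is identical, which is what "without loss of generality" covers), so that $x_p = \sum_{i=1}^{p-1} \lambda_i x_i$. Expanding the last argument by $p$-linearity gives $$\phi(x_1, \dots, x_{p-1}, x_p) = \sum_{i=1}^{p-1} \lambda_i \, \phi(x_1, \dots, x_{p-1}, x_i) = 0,$$ since each summand has $x_i$ in two different slots and therefore vanishes by $(ii)$.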

Our

1 Answer


The problem is that the statements (ii) and (iii) have quantifiers in them: the word "whenever" must be understood as "for all $x_1,\dots,x_p$ such that...".

Condition (ii) tells you only what happens when two vectors in $x_1,\dots,x_p$ are equal, and doesn't tell you anything about what happens when $x_1,\dots,x_p$ are only linearly dependent; (ii) is only a special case of (iii), and that's why you need a proof.
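As a concrete illustration (mine, not the book's): take $p = 2$ over $\mathbb{R}$ and a vector $x \neq 0$. The pair $(x, 2x)$ is linearly dependent, yet $x \neq 2x$, so (ii) alone says nothing about $\phi(x, 2x)$. It is bilinearity that bridges the gap: $\phi(x, 2x) = 2\,\phi(x, x) = 0$, where the last equality uses (ii).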


Let us make this more abstract, and give names to some statements:

  • $A$ will denote the statement "$\phi(x_1,\dots ,x_p)=0$"
  • $B$ will denote the statement "$x_i=x_j$ for some $i\neq j$"
  • $C$ will denote the statement "$x_1,\dots ,x_p$ are linearly dependent".

Then $(ii)$ is the statement $$\forall x_1,\dots ,x_p\, (B\Rightarrow A)$$ and $(iii)$ is the statement $$\forall x_1,\dots ,x_p \, (C\Rightarrow A).$$ Moreover, $B\Rightarrow C$; so if $(iii)$ holds, then for all $x_1,\dots ,x_p$ we have $$B\Rightarrow C\Rightarrow A,$$ and thus $(ii)$ holds (this is why I said that $(ii)$ is a special case of $(iii)$). So the proof of $(iii)\Rightarrow (ii)$ is simple.

But if you only know $(ii)$, then you have $B\Rightarrow A$ and $B\Rightarrow C$, but you can't deduce that $C\Rightarrow A$. Doing so would just be a bad syllogism, which is a frequent logical mistake.
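To see that this syllogism really is invalid in general (a toy example of my own, outside linear algebra): let $B$ be "$n = 2$", $C$ be "$n$ is even", and $A$ be "$n$ is prime". Then $B \Rightarrow A$ and $B \Rightarrow C$, but $C \not\Rightarrow A$, since $4$ is even and not prime. In the theorem, it is the additional ingredient of $p$-linearity that repairs the implication $(ii) \Rightarrow (iii)$.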

Arnaud D.
  • So the catch is when we go from $(iii)$ to $(ii).$ – Our Jun 30 '17 at 08:22
  • @Leth I'm not sure I see what you mean. In fact $(iii)\Rightarrow (ii)$ is straightforward, but $(ii)\Rightarrow (iii)$ is not, you need to use the $p$-linearity. – Arnaud D. Jun 30 '17 at 08:30
  • If $x_1,\ldots,x_p$ are linearly dependent, then $x_1,\ldots, x_i,\ldots,x_i,\ldots, x_p$ are too – Fakemistake Jun 30 '17 at 08:37
  • @ArnaudD. How is $(iii)\Rightarrow (ii)$ straightforward? I've been trying to prove it for 20 minutes. – Our Jun 30 '17 at 10:21
  • @ArnaudD. Plus, there is still a confusing part: if $x_i = x_j$, then the vectors $x_1, \dots, x_i, \dots, x_j, \dots, x_p$ are linearly dependent, and $\phi(x_1, \dots, x_p) = 0$, so the result directly follows. – Our Jun 30 '17 at 10:23
  • @ArnaudD. I mean, I get that $(ii)$ is a special case of $(iii)$, but when $(ii)$ is assumed, there is no way $(iii)$ is false. – Our Jun 30 '17 at 10:25
  • After thinking about it for a day, I got it in the end, thanks. – Our Jul 01 '17 at 12:57