
A subset $S$ of a vector space $V$ is said to be linearly dependent if there exist a finite number of distinct vectors $x_1, \ldots , x_n$ in $S$ and scalars $a_1 , \ldots ,a_n$ not all zero, such that $$ a_1 x_1 + \cdots + a_n x_n =0 $$
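For a concrete instance of this definition: in $\mathbb{R}^2$, the set $S = \{(1,0),\,(2,0)\}$ is linearly dependent, since the distinct vectors $x_1 = (1,0)$, $x_2 = (2,0)$ and the scalars $a_1 = 2$, $a_2 = -1$ (not all zero) satisfy $$ 2\,(1,0) + (-1)\,(2,0) = (0,0). $$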

I want to translate this definition into a logical formula and then find the formula for linear independence.

Here is what I tried.

$$ \exists [ x_1 ,\ldots , x_n \in S : x_i \not = x_j] \exists [a_k \in F : a_k \not = 0] : a_1 x_1+\cdots + a_n x_n = 0 $$

Now, to obtain the definition of linear independence, I negate the above.

$$ \forall [x_1 ,\ldots , x_n \in S : x_i = x_j] \forall [a_k \in F : a_k = 0] : a_1 x_1+\cdots + a_n x_n \not =0 $$

This is obviously absurd.

What is wrong and what is the exact translation?


1 Answer


Until you're more used to doing this sort of thing, you should write out the original statement explicitly before you negate it. The definition of linear dependence is:

$S$ is linearly dependent iff there are finitely many vectors $x_1, \dotsc,x_n\in S$ (all distinct) and scalars $a_1,\dotsc,a_n \in F$ such that: the scalars are not all zero, and $\sum_{i=1}^n a_i x_i = 0$.

The negation of that, sticking to mathematical English, is:

$S$ is linearly independent iff for every set of finitely many vectors $x_1, \dotsc,x_n\in S$ (all distinct) and scalars $a_1,\dotsc,a_n \in F$: if the scalars are not all zero, then $\sum_{i=1}^n a_i x_i \ne 0$.

Note that "if the scalars are not all zero, then $\sum_{i=1}^n a_i x_i \ne 0$" is equivalent to its contrapositive: "if $\sum_{i=1}^n a_i x_i = 0$, then the scalars are all zero", which is the more common way of defining linear independence.
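Spelling that equivalence out symbolically: with $P$ standing for "the scalars are not all zero" and $Q$ for "$\sum_{i=1}^n a_i x_i \ne 0$", this is just the propositional law $$ P \to Q \;\equiv\; \neg Q \to \neg P, $$ whose right-hand side reads: if $\sum_{i=1}^n a_i x_i = 0$, then all the $a_i$ are zero.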

More formally, here's linear dependence: $$ (\exists x_1, \dotsc, x_n\in S)\Bigl[\,\bigwedge_{1\le i<j\le n} x_i\ne x_j \;\wedge\; (\exists a_1,\dotsc,a_n\in F)\Bigl(\,\bigvee_{1\le i\le n}a_i\ne 0 \;\wedge\; \sum_{1\le i\le n}a_i x_i = 0\Bigr)\Bigr] $$ The form of this statement is: $$ (\exists \vec{x} \in S)\bigl[A(\vec{x}) \wedge (\exists\vec{a}\in F)\bigl(B(\vec{a})\wedge C(\vec{a}, \vec{x})\bigr)\bigr] \tag{*} $$ where

  • $A(\vec{x})$ is the big conjunction saying that the $x_i$ are all distinct,
  • $B(\vec{a})$ is the big disjunction saying that some $a_i$ is not zero, and
  • $C(\vec{a},\vec{x})$ says that the sum of the products $a_i x_i$ is zero.
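Stating explicitly the two standard equivalences that drive the negation in the next step, $\neg(\exists x)\,\varphi \equiv (\forall x)\,\neg\varphi$ and $\neg(\varphi \wedge \psi) \equiv \varphi \to \neg\psi$, and chaining them on (*) (abbreviating $A(\vec{x})$, $B(\vec{a})$, $C(\vec{a},\vec{x})$ to $A$, $B$, $C$): $$\begin{split} \neg(\exists \vec{x}\in S)\bigl[A \wedge (\exists\vec{a}\in F)(B\wedge C)\bigr] &\equiv (\forall \vec{x}\in S)\bigl[A \to \neg(\exists\vec{a}\in F)(B\wedge C)\bigr] \\ &\equiv (\forall \vec{x}\in S)\bigl[A \to (\forall\vec{a}\in F)(B \to \neg C)\bigr]. \end{split}$$ This is where the question's attempt went astray: negation turns $\exists$ into $\forall$ and conjunction into implication; it does not negate the side conditions $x_i \ne x_j$ and $a_k \ne 0$ themselves.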

Negating (*) gives: $$ (\forall \vec{x} \in S)\bigl[A(\vec{x}) \to (\forall\vec{a}\in F)\bigl(B(\vec{a})\to \neg C(\vec{a}, \vec{x})\bigr)\bigr] \tag{$\neg$* 1} $$ Rearranging that and using the contrapositive gives: $$ (\forall \vec{x} \in S)\bigl[A(\vec{x}) \to (\forall\vec{a}\in F)\bigl(C(\vec{a}, \vec{x})\to \neg B(\vec{a})\bigr)\bigr] \tag{$\neg$* 2} $$ or, finally, $$ (\forall \vec{x} \in S)(\forall\vec{a}\in F)\bigl[\bigl(A(\vec{x}) \wedge C(\vec{a}, \vec{x})\bigr)\to \neg B(\vec{a})\bigr] \tag{$\neg$* 3} $$ This last is the familiar standard form expressing linear independence ($\neg B(\vec{a})$ says that all the $a_i$ are zero).
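Expanding the abbreviations in ($\neg$* 3) back into the original notation gives the explicit formula for linear independence that the question asks for: $$ (\forall x_1,\dotsc,x_n\in S)(\forall a_1,\dotsc,a_n\in F)\Bigl[\Bigl(\,\bigwedge_{1\le i<j\le n} x_i\ne x_j \;\wedge\; \sum_{1\le i\le n}a_i x_i = 0\Bigr) \to \bigwedge_{1\le i\le n}a_i = 0\Bigr] $$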

  • Hi. Can you please help me understand why it is $\forall a_i \ne 0$ in the expression in the linear dependence definition after "More formally, here's linear dependence"? I think that should have been $\exists i$ such that $a_i\ne 0$. – Koro Jul 22 '21 at 07:31
  • @Koro Just noticing your question. The clause in question does not use $\forall$; it uses $\bigvee$ (a disjunction, 'or', not 'and' as in universal quantification), and the index is $i$; it says just what you wish it to say. – BrianO Jan 08 '23 at 04:44