
Theorem 3.2 says: Every finite orthogonal set of nonzero vectors is linearly independent.

The proof is simple, but it seems to me that the finiteness hypothesis is redundant, for the argument in the proof applies to an infinite set as well. Am I right?

The proof runs as follows: If $k > 0$ is an integer, if $a_{1}, \dots, a_{k}$ are reals, if $v_{1}, \dots, v_{k}$ are pairwise orthogonal nonzero vectors, and if $$\sum_{j=1}^{k}a_{j}v_{j} = 0,$$ then, by taking the inner product with some $v_{i}$, we have $$a_{i}(v_{i} \cdot v_{i}) = 0,$$ so that $$a_{i} = 0.$$ Since this argument holds for any $1 \leq i \leq k,$ qed.
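Spelled out, the inner-product step is $$0 = v_{i} \cdot \sum_{j=1}^{k} a_{j}v_{j} = \sum_{j=1}^{k} a_{j}\,(v_{i} \cdot v_{j}) = a_{i}(v_{i} \cdot v_{i}),$$ since orthogonality gives $v_{i} \cdot v_{j} = 0$ for every $j \neq i$, and $v_{i} \cdot v_{i} \neq 0$ because $v_{i} \neq 0$.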

To me, the proof above holds for any $k$.


2 Answers


You did not provide the proof to us, so I cannot actually say whether or not the book's proof applies to infinite sets.

But it seems likely to me that your book omits infinite-dimensional spaces because they are much more complicated, and often more annoying, than finite-dimensional spaces. Any time you are dealing with an infinite number of things, it is important to have careful definitions and to adhere to them.

There is a fundamental decision to be made for infinite-dimensional vector spaces. What should it mean for a set of vectors to span a space, or for a set of vectors to be linearly independent? If a set $S$ of vectors is linearly independent, do we mean that there is no nontrivial finite linear combination of vectors in $S$ that sums to $0$, or do we perhaps mean that there is no nontrivial infinite linear combination summing to $0$? For that matter, what does it even mean to sum an infinite number of vectors?

These are good, worthwhile questions that your book likely doesn't want to entertain, at least not at first. For related concepts, look up the Schauder basis, the Hamel basis, and the difference between them. Hopefully this will give a sense of the subtleties and depth hiding just beneath the surface.
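For instance, in $\ell^{2}$ the standard unit vectors $e_{n}$ form an orthonormal set, and $$x = \sum_{n=1}^{\infty} \frac{1}{n}\, e_{n} = \left(1, \tfrac{1}{2}, \tfrac{1}{3}, \dots\right)$$ lies in $\ell^{2}$ and is a norm-convergent infinite sum of the $e_{n}$, yet it is not a finite linear combination of them. Whether such infinite sums should count in the definitions of span and independence is exactly the Schauder-versus-Hamel distinction.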


Any linearly dependent set contains, by definition, a finite linearly dependent subset, so you are right.
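In detail: if an orthogonal set $S$ of nonzero vectors were linearly dependent, there would be finitely many distinct $v_{1}, \dots, v_{k} \in S$ and scalars $a_{1}, \dots, a_{k}$, not all zero, with $$\sum_{j=1}^{k} a_{j}v_{j} = 0.$$ But $\{v_{1}, \dots, v_{k}\}$ is itself a finite orthogonal set of nonzero vectors, so the theorem forces every $a_{j} = 0$, a contradiction. (This uses the standard convention that linear combinations are finite sums.)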