
I'm trying to solve the following problem:


Suppose that $K$ is an infinite field, and $V$ is a vector space over $K$. Show that it is not possible to write $V=\bigcup_{i=1}^{n}U_i$, where $U_1,...,U_n$ are proper linear subspaces.


This is exercise 1.17 from the book "A Course in Galois Theory" by Garling. I have found an argument that I believe must be incorrect, but I cannot identify the error. I used two results from the book (keeping the numbering and statements identical to those in the book, for reference):


Theorem 1.1. Suppose that $A$ is a finite subset of a vector space $V$ over $K$ that spans $V$, and that $C$ is a linearly independent subset of $A$ ($C$ may be empty). There exists a basis $B$ of $V$ with $C\subseteq B\subseteq A$.

Theorem 1.4. Suppose that $U$ is a linear subspace of a finite-dimensional vector space $V$ over $K$. Then $\dim U\leqslant\dim V$, and $\dim U=\dim V$ if and only if $U=V$.


My argument is as follows:

Proof. Assume that $V=\bigcup_{i=1}^{n}U_i$, where $U_1,\ldots,U_n$ are proper subspaces of $V$. We can assume that no $U_i$ is contained in the union of the other subspaces (otherwise we may simply discard it). In this case, we can find elements $u_i\in U_i-\bigcup_{j\neq i}U_j$. Consider the subspace $U=\text{span}(u_1,\ldots,u_n)$ of $V$. Taking the set $A$ from the statement of Theorem 1.1 to be $\{u_1,\ldots,u_n\}$ (applied to the space $U$, with $C=\varnothing$), we see that $A$ contains a basis of $U$, and therefore $U$ is a finite-dimensional $K$-vector space. Since $U\subseteq V=\bigcup_{i=1}^{n}U_i$, we must have $U=(U\cap U_1)\cup\cdots\cup(U\cap U_n)$. Thus, we can write $U$ as a finite union of subspaces of $U$.

Assume that the result holds for finite-dimensional vector spaces over $K$. In this case, since $\dim U<\infty$, the result applies to $U$, and from $U=(U\cap U_1)\cup\cdots\cup(U\cap U_n)$ we see that at least one of the subspaces $U\cap U_i$ is not a proper subspace of $U$. This means that $U=U\cap U_k$ for some $k\in\{1,\ldots,n\}$, which implies $U\subseteq U_k$. However, this is absurd, since then $u_i\in U\subseteq U_k$ for every $i\neq k$, contradicting our choice of the elements $u_i$. Thus, it suffices to prove the result for finite-dimensional $K$-vector spaces.

Suppose that $W=\bigcup_{i=1}^{m}W_i$ is a vector space over $K$, where each $W_i$ is a proper subspace of $W$, and $\dim W<\infty$. Let's say $\dim W=r$, and let $B=\{b_1,\ldots,b_r\}$ be a basis for $W$. Clearly, every element of $W_i$ is generated by $B$, so by Theorem 1.1, there exists a basis $B_i$ for $W_i$ contained in $B$. Since each $W_i$ is a proper subspace of $W$, we get from Theorem 1.4 that $|B_i|=\dim W_i<\dim W=r=|B|$. Consequently, $B_i$ is a proper subset of $B$.

Now, consider the element $w=b_1+\cdots+b_r$ in $W$. Since $W=\bigcup_{i=1}^{m}W_i$, we must have $b_1+\cdots+b_r\in W_k$ for some $k\in\{1,\ldots,m\}$. The subspace $W_k$ is generated by the proper subset $B_k$ of $B$, so $w$ can be written as a linear combination of the elements of $B_k$. Since $B_k\neq B$, this combination cannot coincide with $b_1+\cdots+b_r$: the latter has $r$ nonzero terms, whereas $w\in W_k=\text{span}(B_k)$ means that $w$ can be written using at most $|B_k|<r$ nonzero scalar multiples of elements of $B$. In other words, we have two distinct representations of $w$ in terms of the basis $B$. Subtracting one from the other, we obtain a nontrivial linear combination of basis elements equal to the zero vector, contradicting the linear independence of $B$. Having reached a contradiction, we conclude that $W$ cannot be written as a finite union of proper subspaces, which proves the statement for finite-dimensional $K$-vector spaces. $\Box$

The statement is false for vector spaces over finite fields, so the hypothesis that $K$ is infinite is essential. Since my argument never seems to use this hypothesis, either I am using it implicitly at some point, or there is an error in the argument; in either case, I cannot find where this occurs.
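As a quick illustration of why finiteness of the field matters (a standard example, not taken from Garling's book): over the two-element field $\mathbb{F}_2$, the plane $\mathbb{F}_2^2$ is the union of its three one-dimensional subspaces,
$$\mathbb{F}_2^2=\{(0,0),(1,0),(0,1),(1,1)\}=\text{span}\{(1,0)\}\cup\text{span}\{(0,1)\}\cup\text{span}\{(1,1)\},$$
since each of the three nonzero vectors spans one of these lines and $(0,0)$ belongs to all of them.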

I am aware that there are other solutions to this same problem here (for example, this one). I am only interested in identifying where the assumption mentioned above is used or where the error in this argument lies. Thank you in advance for any tips or comments!

PS: This is my first question here, and English is not my native language. I apologize for any formatting errors or writing mistakes in the question.

    You cannot apply Theorem 1.1 as you do in the third paragraph of the proof to assert that there is a basis $B_i$ of $W_i$ contained in $B$, because $B$ is not assumed to be a subset of $W_i$ (in fact, it cannot possibly be a subset if $W_i$ is a proper subspace of $W$). So there's definitely an error there. – Arturo Magidin Aug 03 '23 at 02:39
  • Thank you so much! I really thought there was something simple that I was missing. – Matheus Frota Aug 03 '23 at 02:58

1 Answer


(To ensure this does not remain unanswered)

The error in your argument is in paragraph 3. You say:

Suppose that $W=\bigcup_{i=1}^{m}W_i$ is a vector space over $K$, where each $W_i$ is a proper subspace of $W$, and $\dim W<\infty$. Let's say $\dim W=r$, and let $B=\{b_1,\ldots,b_r\}$ be a basis for $W$. Clearly, every element of $W_i$ is generated by $B$, so by Theorem 1.1, there exists a basis $B_i$ for $W_i$ contained in $B$.

This is not a valid application of Theorem 1.1. Note that Theorem 1.1 says that if you have a spanning subset $A$ of $V$ and a linearly independent subset $C$ of $A$, then there is a basis of $V$ containing $C$ and contained in $A$. You are trying to apply this with $W_i$ playing the role of $V$, your basis $B$ of $W$ playing the role of the spanning set $A$, and $\varnothing$ playing the role of $C$. But $B$ is not known to be a subset of $W_i$, so you cannot apply the theorem.

In fact, if you had $B\subseteq W_i$, then you would have $W=\mathrm{span}(B)\subseteq W_i$, which would imply that $W_i$ is not a proper subspace of $W$. So we know for certain that $B$ is not a subset of $W_i$, and therefore that you cannot apply Theorem 1.1 as you attempt to do.
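To see concretely why a proper subspace need not have a basis contained in a given basis of the ambient space (a small illustration of my own, not from the book): take $W=K^2$ with basis $B=\{e_1,e_2\}$ and let $W_1=\mathrm{span}(e_1+e_2)$. Every element of $W_1$ is a linear combination of $e_1$ and $e_2$, yet
$$\mathrm{span}(\varnothing)=\{0\}\neq W_1,\qquad \mathrm{span}(e_1)\neq W_1,\qquad \mathrm{span}(e_2)\neq W_1,\qquad \mathrm{span}(e_1,e_2)=W\neq W_1,$$
so no subset of $B$ is a basis of $W_1$: Theorem 1.1 simply does not apply with $B$ as the spanning set for $W_1$.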

Arturo Magidin