
Is there a somewhat elementary/analytic way to show that $\{x^{k_\alpha}\}$, or $\{\sin (k_\alpha x)\}$, or some similar set is linearly independent over the reals for any collection $\{k_\alpha\}$ of non-negative reals?

My goal is to show that the set of continuous functions $C[0,1]$ has uncountable Hamel dimension with elementary tools.

In the case that $k_\alpha = k\in\{0,1,2,\ldots\}$ this boils down to the Fourier basis, but I'd like to avoid using orthogonality.

cdipaolo
  • Maybe it is easier with the functions $x \mapsto \exp(k_{\alpha}x)$: considering a combination of $n$ terms which you assume is zero, differentiating $n$ times at zero makes a Vandermonde matrix appear, and it is easy to conclude (sketched below). – charmd Sep 01 '18 at 21:56
  • @CharlesMadeline Ah yes that should work, thank you! – cdipaolo Sep 01 '18 at 21:58
  • @cdipaolo You can do the same with $\sin(\alpha x)$, but you need to apply $\frac{d^2}{dx^2}$ $n$ times rather than $\frac{d}{dx}$. –  Sep 01 '18 at 22:01
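
For reference, a sketch of the Vandermonde argument from the comments, with the details filled in: suppose $\sum_{i=1}^n c_i e^{k_i x}=0$ on $[0,1]$ for distinct $k_1,\dots,k_n\geq 0$. Differentiating $j$ times and evaluating at $x=0$ gives $$\sum_{i=1}^n c_i k_i^j = 0,\qquad j=0,1,\dots,n-1,$$ i.e. $V\mathbf{c}=0$ where $V=(k_i^j)$ is a Vandermonde matrix with distinct nodes, hence invertible, so $c_1=\cdots=c_n=0$.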

4 Answers


Consider the functions $x^{\alpha}$ on $(0,1]$ for all $\alpha\in\mathbb{R}$. Suppose they were linearly dependent, so there are distinct $\alpha_1,\dots,\alpha_n$ and nonzero scalars $c_1,\dots,c_n$ such that $$\sum c_ix^{\alpha_i}=0$$ for all $x\in (0,1]$. Clearly we must have $n>1$, and let us also pick such a relation for which $n$ is minimal. Dividing by $x^{\alpha_1}$, we may assume $\alpha_1=0$. Now differentiate the equation to get a new relation $$\sum\alpha_i c_i x^{\alpha_i-1}=0.$$ But the coefficient of $x^{\alpha_1-1}$ is now $0$ (and none of the others are), so we now have a relation with $n-1$ terms instead of $n$ terms. This contradicts the minimality of $n$ (note here that it is important that $n>1$ so $n-1>0$ and our new relation is still nontrivial).

This shows that the functions $x^\alpha$ for $\alpha\in\mathbb{R}$ are linearly independent on $(0,1]$. It follows that the functions $x^\alpha$ for $\alpha\geq 0$ are linearly independent in $C[0,1]$.
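
To illustrate the induction with the smallest case $n=2$: if $c_1x^{\alpha_1}+c_2x^{\alpha_2}=0$ on $(0,1]$ with $\alpha_1\neq\alpha_2$, then dividing by $x^{\alpha_1}$ gives $$c_1+c_2x^{\alpha_2-\alpha_1}=0,$$ and differentiating gives $(\alpha_2-\alpha_1)c_2x^{\alpha_2-\alpha_1-1}=0$, forcing $c_2=0$ and then $c_1=0$.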

Eric Wofsey

Another way that can be easily adjusted for greater generality: consider, for all $\varepsilon\in\left(0,\frac12\right)$, $$f_\varepsilon(x)=\begin{cases}\exp\frac{1}{(x-\varepsilon)(x-1+\varepsilon)}&\text{if }x\in(\varepsilon,1-\varepsilon)\\0 &\text{otherwise}\end{cases}$$

Then any linear combination $a_1f_{\varepsilon_1}+\cdots +a_nf_{\varepsilon_n}=0$ with $\varepsilon_1<\cdots<\varepsilon_n$ must satisfy $a_1=0$, because on the interval $(\varepsilon_1,\varepsilon_2)$ only $f_{\varepsilon_1}$ is nonzero, so that is the only way for the combination to be identically $0$ there. Repeating the argument on $(\varepsilon_2,\varepsilon_3)$, and so on, gives $a_2=\cdots=a_n=0$. Since the family $\{f_\varepsilon\}$ is uncountable, this directly exhibits an uncountable linearly independent subset of $C[0,1]$.
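
Spelling out the support computation: for $x\in(\varepsilon_1,\varepsilon_2)$ we have $\varepsilon_1<x<\varepsilon_2<\tfrac12<1-\varepsilon_1$, so $f_{\varepsilon_1}(x)>0$, while each $f_{\varepsilon_j}$ with $j\geq 2$ vanishes there because $x<\varepsilon_j$. Hence on $(\varepsilon_1,\varepsilon_2)$ the combination reduces to $$a_1f_{\varepsilon_1}(x)=0,\qquad f_{\varepsilon_1}(x)>0,$$ which forces $a_1=0$.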


(It seems worth it to repost my answer to the duplicate.)

The functions $\frac{1}{x-a}$ for $a>1$ are linearly independent on $[0,1]$.

Suppose $f(x) = \sum_{i=1}^n\frac{c_i}{x-a_i}=0$ for all $x\in[0,1]$, where the $a_i>1$ are distinct. The argument can be finished in different ways:

(1) $f$ is analytic away from the points $a_i$, so its zeroes are isolated unless $f\equiv 0$; since $f$ vanishes on all of $[0,1]$, it must be identically $0$. But if some $c_i\not=0$, then $f(x)\rightarrow\infty$ as $x\rightarrow a_i$, so $f$ is not identically $0$, a contradiction.

(2) Multiplying both sides by $\prod_{i}(x-a_i)$, we get a polynomial $p(x)$ that vanishes on $[0,1]$, so it must be the zero polynomial, for otherwise it would have at most finitely many zeroes. But if $c_i\not=0$, then evaluating at $x = a_i$ gives $p(a_i)= c_i\prod_{j\not=i}(a_i-a_j)\not=0$, a contradiction.
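
A concrete instance of (2), with made-up values $a_1=2$, $a_2=3$: if $\frac{c_1}{x-2}+\frac{c_2}{x-3}=0$ on $[0,1]$, multiplying by $(x-2)(x-3)$ gives $$p(x)=c_1(x-3)+c_2(x-2)\equiv 0,$$ and evaluating at $x=2$ and $x=3$ yields $-c_1=0$ and $c_2=0$.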

Just a user

Suppose $0 < w_1< \cdots < w_n$ and that $f(x) = \sum_k a_k \sin w_k x$ vanishes identically on $[0,1]$. Choose $x \in (0,1]$ such that $\sin w_n x \neq 0$.

Let $(Ag)(x) = - {1 \over w_n^2} g''(x)$. Note that $A^m f = 0$, and $(A^m f )(x) = \sum_{k<n} a_k ({w_k \over w_n})^{2m} \sin w_k x + a_n \sin w_n x$. Letting $m \to \infty$ shows that $a_n \sin w_n x = 0$, hence $a_n = 0$. Repeat for the other $a_k$.
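
The computation behind this, spelled out: each $\sin w_k x$ is an eigenfunction of $A$, since $$A(\sin w_k x) = -{1 \over w_n^2}\,{d^2 \over dx^2}\sin w_k x = \left({w_k \over w_n}\right)^2\sin w_k x,$$ so applying $A$ $m$ times scales the $k$-th term by $({w_k / w_n})^{2m}$, which tends to $0$ as $m\to\infty$ for $k<n$ and equals $1$ for $k=n$.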

copper.hat