I was wondering about the truth of the following:
$$ \lim_{n\to \infty} \| f_n - f \|_{\infty,S}=0 \implies \forall p\in \{ 2,3,4,\dots\},\ \lim_{n\to\infty} \| f_n - f \|_{p,S} = 0$$
or at least in $L^2$ norm.
I think this statement is false. My reasoning comes from re-writing the left-hand side:
$$ \lim_{n\to \infty} \left[ \lim_{p \to \infty} \| f_n - f \|_{p,S} \right] =0 $$
and then reading what it means: as $n$ goes to infinity, the inner quantity goes to zero, but that inner quantity is itself a limit as $p$ goes to infinity. My suspicion is that since the inner limit implicitly requires $p$ to grow without bound (or "be at infinity already", since we are talking about the actual limit value), the outer limit in $n$ need not hold for any fixed finite $p$. This is my intuition for why it's false: informally, the outer limit is only guaranteed when "$p=\infty$".
I bet that a counterexample would be the easiest way to disprove it. To construct one I tried unraveling what the limit definitions were saying rigorously. I opened up the outer limit first (i.e. the definition using "for sufficiently large $N$..."):
$$ \forall \epsilon_n > 0, \exists N': \text{if } n \geq N' \implies \left| \lim_{p\to \infty} \| f_n - f \|_{p,S} - 0 \right| < \epsilon_n$$
where $\epsilon_n$ is a single symbol denoting the epsilon for the outer limit (i.e. it's not a function of $n$; the subscript $n$ just marks it as the outer limit's epsilon).
But when I tried to open up the inner limit I got confused about where its new quantifiers were supposed to go, and I gave up because I didn't know how to state it without writing nonsense. Does someone know how to do this? Maybe this is the wrong way to do it, but it's what I have...
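One attempt that at least nests the quantifiers consistently (writing $L_n := \lim_{p \to \infty} \| f_n - f \|_{p,S}$ for the value of the inner limit, assuming it exists) is: for each fixed $n$,
$$ \forall \delta > 0, \exists P: p \geq P \implies \big|\, \| f_n - f \|_{p,S} - L_n \,\big| < \delta $$
and then the outer limit is a statement about the numbers $L_n$ alone:
$$ \forall \epsilon > 0, \exists N': n \geq N' \implies |L_n| < \epsilon $$
though I am not sure this is the standard way to do it.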
Anyway, I thought it would be nice to put my question in context, since I believe I am not dealing with arbitrary functions. I was reading the following approximation theory paper in the context of Neural Networks. I am no expert in approximation theory, hence my question. In section 3.1 of the paper they have the following sentences:
Notice, however, that from the point of view of machine learning, the relevant norm is the L2 norm. In this sense, several of our results are stronger than needed.
I guess since the quote says:
several of our results are stronger than needed.
I assumed that since the authors have convergence results in the sup norm, it must somehow automatically imply the result in the truly relevant norm (i.e. the $L^2$ norm) in their machine learning context. Since a stronger statement $A$ implies weaker statements $B$ (check Why is the definition of stronger statement the way it is in logic?), it was natural to assume that indeed, the authors were implying that it automatically extended to the $L^2$ norm.
Is that true? Do the results in that paper extend to the relevant $L^2$-norm loss functions in machine learning?
As a summary, the context of the target functions $f$ and the functions used to approximate them (i.e. the ones forming the sequence $f_n$) can be found in sections 3.1 and 3.2, I believe. The target class $W^n_{m}$ is a smoothness class with the following property:
$$ \| f \|_{\infty} + \sum_{ 1 \leq | k |_1 \leq m}\| D^{k} f \|_{\infty} \leq 1 $$
i.e. all functions of $n$ variables with partial derivatives up to order $m < \infty$, indexed by the multi-integer $k \geq 1$, where $|k|_1$ is the sum of the components of $k$.
These functions are approximated with Neural Networks, I believe with smooth activation functions.
We have $|f - f_n| \le \|f - f_n\|_{\infty, S}$ almost everywhere, so (assuming $\mu(S) < \infty$):
$$\|f - f_n\|_{p,S}^p = \int_S |f - f_n|^p \,d\mu \le \int_S \|f - f_n\|_{\infty, S}^p \,d\mu = \mu(S)\,\|f - f_n\|_{\infty, S}^p \xrightarrow{n\to\infty} 0$$
We conclude $f_n \xrightarrow{n\to\infty} f$ in $L^p(S)$.
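As a quick numerical sanity check of this bound (a sketch with hypothetical choices: $S = [0,1]$ so $\mu(S) = 1$, $f(x) = \sin x$, and $f_n(x) = \sin x + \frac{\cos x}{n}$, approximating the integral by a Riemann mean on a uniform grid):

```python
import numpy as np

# Sketch: check ||f - f_n||_p <= mu(S)^(1/p) * ||f - f_n||_inf on S = [0, 1]
# for the hypothetical choices f(x) = sin(x), f_n(x) = sin(x) + cos(x)/n.
xs = np.linspace(0.0, 1.0, 10_001)

def p_norm(g, p):
    # Riemann-mean approximation of (integral_S |g|^p dmu)^(1/p); mu(S) = 1 here
    return float((np.abs(g) ** p).mean() ** (1.0 / p))

for n in (1, 10, 100):
    diff = np.cos(xs) / n            # f - f_n on the grid
    sup = float(np.abs(diff).max())  # ||f - f_n||_{inf, S}  (= 1/n here)
    for p in (2, 3, 4):
        assert p_norm(diff, p) <= sup + 1e-12
        print(f"n={n}, p={p}: ||f-f_n||_p ~ {p_norm(diff, p):.5f} <= {sup:.5f}")
```

Both norms shrink together as $n$ grows, exactly because the $p$-norm is dominated by the sup norm on a finite-measure set.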
– mechanodroid Dec 10 '17 at 21:30

For a counterexample when $\mu(S) = \infty$, take $f_n = \frac{1}{n}\chi_{[2^n, 2^{n+1}]}$ on $\mathbb{R}$. Then:
$$\|f_n\|_{\infty, \mathbb{R}} = \frac{1}{n} \xrightarrow{n\to\infty} 0$$
so $f_n \xrightarrow{n\to\infty} 0$ in $L^\infty(\mathbb{R})$.
However:
$$\|f_n\|_{p, \mathbb{R}}^p = \int_{2^n}^{2^{n+1}} \frac{1}{n^p} \,d\lambda = \frac{2^n}{n^p}$$
which does not converge to $0$, so $(f_n)_{n=1}^\infty$ does not converge in $L^p(\mathbb{R})$.
– mechanodroid Dec 10 '17 at 21:30
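A closed-form numerical sketch of that counterexample (taking $f_n = \frac{1}{n}\chi_{[2^n, 2^{n+1}]}$, which matches the integral above):

```python
# Counterexample sketch: f_n = (1/n) * indicator of [2^n, 2^(n+1)] on R.
# The sup norm 1/n vanishes, but ||f_n||_p^p = 2^n / n^p blows up for any fixed p.

def sup_norm(n: int) -> float:
    return 1.0 / n

def p_norm_pth_power(n: int, p: int) -> float:
    # integral of the constant (1/n)^p over an interval of length 2^n
    return 2.0 ** n / n ** p

for n in (1, 5, 10, 30):
    print(f"n={n:>2}: sup={sup_norm(n):.4f}, ||f_n||_2^2={p_norm_pth_power(n, 2):.4g}")
```

So the sup norm and the $L^p$ norms can genuinely decouple once the underlying set has infinite measure.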