23

From Wikipedia:

The vector space of (equivalence classes of) measurable functions on $(S, Σ, μ)$ is denoted $L^0(S, Σ, μ)$.

This doesn't seem connected to the definition of $L^p(S, Σ, μ)$, for $p \in (0, \infty)$, as the set of measurable functions $f$ such that $\int_S |f|^p \, d\mu < \infty$. So I wonder if I'm missing some connection, and why the notation $L^0$ is used if there is none?

Thanks and regards!

Bernard
  • 175,478
Tim
  • 47,382
  • The notation $L^0$ suggests that you are using the "$L^0$ norm." – Qiaochu Yuan Dec 28 '12 at 00:57
  • @Qiaochu: Thanks! How is the $L^0$ norm defined? I didn't find it in the Wikipedia article. – Tim Dec 28 '12 at 00:57
  • It's defined in the Wikipedia article (not as a norm, which is why I used quotes). The Wikipedia article defines the topology induced by the "norm," which is convergence in measure. – Qiaochu Yuan Dec 28 '12 at 01:04
  • Still not found. – Tim Dec 28 '12 at 01:08
  • The name $L^0$-norm is used for at least two different things, neither of which is a norm. One of them is $\int \frac{|f|}{1+|f|}$, the other is $\mu\{x: f(x)\ne 0\}$. Here it's the former. – Dec 28 '12 at 01:11
  • @PavelM: Thanks! Where did you find the two definitions? By "different", I guess you mean they are not equivalent norms? – Tim Dec 28 '12 at 01:13
  • @Tim: I am not sure how much clearer I can be. It is literally the second sentence in the part of the Wikipedia article you link from: "By definition, it contains all the $L^p$, and is equipped with the topology of convergence in measure." – Qiaochu Yuan Dec 28 '12 at 01:28
  • Well, since they are not norms, they cannot be equivalent norms. However, the first one is always bounded by the second, which I am sure you can prove yourself. In the other direction we have the following example: on $[0,1]$ the functions $f_n(x)=1/n$ have second "norm" (measure of support) equal to $1$ while the first "norm" tends to $0$ as $n\to \infty$. .... "$L^0$ norm" as the size of support is used in compressed sensing. The other one is a standard way to metrize convergence in measure. –  Dec 28 '12 at 01:28
  • @QiaochuYuan: I guess I still miss something here. Does convergence in measure define the $L^0$ norm, and how? – Tim Dec 28 '12 at 01:35
  • @Tim: yes, except that "norm" is in quotes. Convergence in measure is "the topology that would be defined by the $L^0$ norm if there were such a thing," or something like that. – Qiaochu Yuan Dec 28 '12 at 01:51
  • @PavelM: Thanks! (1) What are the names for the two "norms" and/or their induced metrics? Is $\mu(f \neq 0)$ called the size of support of $f$? What is the name for the other one? (2) Are they some generalized types of norms? Are the "metrics" induced from them metrics or some generalized metrics? – Tim Dec 28 '12 at 23:07
  • (1) I don't think they have established names. I used "size" informally; it's more precise to call $\mu\{f\ne 0\}$ the measure of the support of $f$, which is sufficiently descriptive. I don't know of a good name for the other. Note that the choice of integrand $|f|/(1+|f|)$ is pretty arbitrary: one could use $\min(|f|,1)$ or $\tan^{-1}|f|$, or lots of other bounded functions for the same purpose.... They are both quasinorms; you can take $K=2$ in the definition. – Dec 28 '12 at 23:18
  • (2) They both induce legitimate metrics. The proof of the triangle inequality $d(f,h)\le d(f,g)+d(g,h)$ is easier for the second, because $\{f\ne h\}\subset \{f\ne g\}\cup \{g\ne h\}$. For the first one it's a little longer: one has to show that the function $\psi(t)=t/(1+t)$ is subadditive: $\psi(t+s)\le \psi(t)+\psi(s)$ for all $t,s\ge 0$. It then follows that $\psi(|f-h|)\le \psi(|f-g|)+\psi(|g-h|)$, and integration gives the triangle inequality. – Dec 28 '12 at 23:22
  • Thanks, @PavelM! Do you have some references on the two "$L^0$ norms", among others? I am curious where people learn these things from. – Tim Dec 28 '12 at 23:30
  • @Tim One of my comments above has a reference to the Wikipedia article on compressed sensing: this is where the "measure of support" is used all the time. The others are found in textbooks on real analysis, specifically in a section that discusses convergence in measure. E.g., Folland's Real Analysis, or Royden's book by the same title, etc. – Dec 28 '12 at 23:36
  • @PavelM: Thanks! In an earlier comment, " the first one is always bounded by the second, which I am sure you can prove yourself. In the other direction we have the following example: on [0,1] the functions $f_n(x)=1/n$ have second "norm" (measure of support) equal to 1 while the first "norm" tends to 0 as n→∞". I think the example "in the other direction" is still for the first direction: "the first one is always bounded by the second"? – Tim Dec 29 '12 at 05:20
  • Let's see: the "first norm" of $f_n$ is $\int_0^1 \frac{1/n}{1+1/n} dx = \frac{1}{n+1}$. The "second norm" (measure of support) is exactly $1$. So, this example demonstrates that one cannot have a bound of the form "second norm" $\le$ constant $\cdot$ "first norm". Of course, this example conforms to the statement "first norm" $\le$ "second norm", but that does not tell us much: we can't prove a general statement by showing an example for which it is true. – Dec 29 '12 at 05:28
  • To prove that "first norm" $\le$ "second norm", one argues as follows: for any value $f(x)$ we have $|f(x)|\le |f(x)|+1$, hence $\frac{|f(x)|}{|f(x)|+1}\le 1$. Integrating this inequality over the set $\{f\ne 0\}$, we obtain that "first norm" $=\int_{\{f\ne 0\}} \frac{|f(x)|}{|f(x)|+1}\,d\mu \le \int_{\{f\ne 0\}} 1\,d\mu =\mu\{f\ne 0\}$, as claimed. – Dec 29 '12 at 05:31
  • Thanks, @PavelM! "the choice of integrand $|f|/(1+|f|)$ is pretty arbitrary: one could use $\min(|f|,1)$ or $\tan^{-1}|f|$, or lots of other bounded functions for the same purpose.... They are both quasinorms". I was wondering: by the choices of integrand being arbitrary, are you saying they all lead to convergence in measure? Although arbitrary, I guess not just any function can be the integrand, can it? I searched in Folland's and Royden's books, but didn't see other choices of integrands except $|f|/(1+|f|)$. – Tim Dec 29 '12 at 22:11
  • Yes, I meant that those functions give other metrics for which the notion of convergence coincides with convergence in measure. What is needed here: equal to $0$ at $0$, increasing, bounded, and subadditive. Any such function could be used in place of $t/(1+t)$. Since $t/(1+t)$ does the job and is simple enough, it gets used in the books. – Dec 29 '12 at 22:34
  • @PavelM: Thanks! Sorry for asking: why "What is needed here: be equal to 0 at 0, increasing, bounded, and subadditive"? is it mentioned in some references? – Tim Dec 29 '12 at 22:45
  • @Tim No, there is no particular reason to mention things like that in a book. I came up with these properties by thinking of what is needed to show that (a) $\int \psi(|f-g|)$ is a metric; (b) convergence in this metric is equivalent to convergence in measure. – Dec 29 '12 at 22:55
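The comparison of the two "$L^0$ norms" made in the comments above can be checked numerically. Here is a sketch of my own (not from the thread's participants), using midpoint-rule approximations on $[0,1]$ and Pavel's example $f_n(x)=1/n$:

```python
def first_norm(f, a=0.0, b=1.0, steps=100_000):
    """Midpoint-rule approximation of the first "norm": integral of |f|/(1+|f|)."""
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        v = abs(f(a + (i + 0.5) * h))
        total += v / (1 + v)
    return total * h

def second_norm(f, a=0.0, b=1.0, steps=100_000):
    """Midpoint-rule approximation of the second "norm": measure of {x : f(x) != 0}."""
    h = (b - a) / steps
    return sum(1 for i in range(steps) if f(a + (i + 0.5) * h) != 0) * h

for n in (1, 10, 100):
    f_n = lambda x, n=n: 1.0 / n
    # exact values: first norm = 1/(n+1) -> 0, while second norm = 1 for every n
    print(n, first_norm(f_n), second_norm(f_n))
```

The output shows the first "norm" shrinking while the measure of support stays at $1$, which is exactly why no bound "second norm" $\le$ constant $\cdot$ "first norm" can hold.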

4 Answers

24

Note that when we restrict ourselves to probability measures, this terminology makes sense: $L^p$ is the space of those (equivalence classes of) measurable functions $f$ satisfying $$\int |f|^p<\infty.$$ Therefore $L^0$ should be the space of those (equivalence classes of) measurable functions $f$ satisfying $$\int |f|^0=\int 1=1<\infty,$$ that is, the space of all (equivalence classes of) measurable functions $f$. And this is indeed the case.

Godot
  • 2,082
  • Can you also account for the choice of topology using such heuristics? (sorry for this word, I can't think of a better one) – Martin Dec 28 '12 at 02:46
  • @Martin I can't figure out a totally convincing reason for such a choice of topology, but IMHO it has something to do with the fact that convergence in the $p$-th norm implies convergence in probability. For $p<1$ we don't have a norm, but we still have a metric. I think (but please check it!) that the topology of convergence in probability might be the largest topology smaller than all the $L^p$ topologies for $p>0$. Take it with a grain of salt and check if it makes sense; it is just my intuition, and it does sound good :) – Godot Dec 28 '12 at 03:10
  • Sounds especially good because the $L^p$ spaces are nested if we consider probability measures. – Godot Dec 28 '12 at 03:12
  • Correction: I wrote "largest topology smaller", but it should be "smallest topology larger" – Godot Dec 28 '12 at 03:37
7

If the measure of $S$ is finite, the $L^p$ spaces are nested: $L^{p}\subset L^q$ whenever $p\ge q$. The smaller the exponent, the larger the space. Since the space of measurable functions contains all of the $L^p$ for $p>0$, one may be tempted to denote it by $L^0$.
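The inclusion is in fact strict. A sketch of my own (not part of this answer): on $(0,1]$ with Lebesgue measure, $f(x)=x^{-1/2}$ lies in $L^1$ but not $L^2$, which the exact truncated integrals $\int_\varepsilon^1 x^{-p/2}\,dx$ make visible as $\varepsilon \to 0$.

```python
import math

def lp_integral(p, eps):
    """Exact value of the truncated integral of |f|^p for f(x) = x^(-1/2),
    i.e. the integral of x^(-p/2) over [eps, 1], via the antiderivative."""
    expo = 1.0 - p / 2.0
    if expo == 0.0:
        return -math.log(eps)  # p = 2: the integral of 1/x diverges as eps -> 0
    return (1.0 - eps ** expo) / expo

for eps in (1e-2, 1e-4, 1e-6):
    # the L^1 column converges to 2; the L^2 column grows like log(1/eps)
    print(eps, lp_integral(1, eps), lp_integral(2, eps))
```

So $f \in L^1 \setminus L^2$, confirming $L^2 \subsetneq L^1$ on this finite measure space.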

This temptation should be resisted and the notation $L^0$ banished from usage. [/rant]

7

I do think that $L^0$ is nice usage. As is well known, $\lim_{p \to \infty} \|\cdot \|_{L^p} = \|\cdot\|_\infty$ for certain spaces or functions. The case for $L^0$ is not as pretty, but still nice.

Recall the distribution function $\mu$ of $f$ given by, $$\mu(\alpha) := \mu_f(\alpha) := \mu\{|f|>\alpha\}.$$

Fubini gives that, $$\|f\|_{L^p}^p = p \int_0^\infty \mu_f(\alpha) \alpha^p \frac{\mathrm{d}\alpha}{\alpha}.$$

We can define the Lorentz spaces in a similar way. And indeed, for a finite measure space, if $p < q$ we have $$L^q \subseteq L^p.$$ Hence, it is natural to define $L^0$ as, $$L^0 = \bigcup_{p > 0} L^p.$$ We would like $L^0$ to be complete as a metric space as well; otherwise the notation would be quite deceiving indeed. For this we need a notion of convergence. On $L^p$ for $0 < p < 1$ it is not a norm that induces the metric, but rather $\|\cdot\|_p^p$.

So, for $0 < p < 1$ we have, $$d_p(f, g) = p \int_0^\infty \mu\{|f - g|>\alpha\} \alpha^p \frac{\mathrm{d}\alpha}{\alpha}.$$

$\varepsilon$-neighborhoods $N^p_\varepsilon$ of $f$ in $L^p$ are then given by $$N^p_\varepsilon(f) = \Biggl\{g : p \int_0^\infty \mu\{|f - g|>\alpha\} \alpha^p \frac{\mathrm{d}\alpha}{\alpha} < \varepsilon \Biggr\}.$$
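The layer-cake formula above can be sanity-checked numerically. A sketch of my own (not part of this answer), for $f(x)=x$ on $[0,1]$, where $\mu_f(\alpha)=1-\alpha$ for $0\le\alpha<1$ and both sides equal $1/(p+1)$:

```python
def lhs(p, steps=200_000):
    """Midpoint rule for ||f||_p^p = integral of x^p over [0,1], with f(x) = x."""
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** p for i in range(steps)) * h

def rhs(p, steps=200_000):
    """Midpoint rule for p * integral of mu{|f|>a} * a^(p-1) over [0,1],
    using mu{x : x > a} = 1 - a for this f."""
    h = 1.0 / steps
    return p * sum((1 - (i + 0.5) * h) * ((i + 0.5) * h) ** (p - 1)
                   for i in range(steps)) * h

for p in (1, 2, 3.5):
    # both sides equal 1/(p+1) exactly for this f
    print(p, lhs(p), rhs(p))
```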

To be continued; I wanted to give a brief remark, but I have decided otherwise in due course.

JT_NL
  • 14,514
  • But $L^0 \supsetneqq \bigcup_{p \gt 0} L^p$. The right hand side is dense, but not everything. Partition $[0,1]$ into countably many intervals. On the $n$th piece take a function which is in $L^{\frac1n}$ but not in $L^p$ for $p \gt \frac1n$. You'll get a measurable function which is not in any $L^p$ for $p \gt 0$. – Martin Dec 29 '12 at 08:30
  • @Martin: True, I'll modify that. You want to end up with convergence in probability. – JT_NL Dec 29 '12 at 08:45
  • Agreed. I'm confident that something along the lines suggested by Godot in the comments to his answer works, and that it should look quite close to what you're doing. I guess the abstract point is that when taking the direct limit of complete metric spaces you need to take the completion to end up with a complete space. – Martin Dec 29 '12 at 09:05
  • Yes, indeed. But, needs a bit of work as you need to mention the topology first. 8-). – JT_NL Dec 29 '12 at 10:32
  • @JonasTeuwen: I know this post was made some time ago, but do you have any thoughts on how to arrive at the $L^{0}$ topology of convergence in measure as a "limit" of the $L^{p}$ topologies? – Matt Rosenzweig Jun 30 '14 at 18:26
3

$L^0$ is just a notation referring to the weakness of the topology of convergence in measure. This space is not locally bounded, but it is metrizable if the underlying measure space is non-atomic and $\sigma$-finite. The proper terminology is F-norm, for complete metric linear spaces (F-spaces) which are not locally bounded. When locally bounded, such spaces are quasi-Banach spaces, which $L^0$ is not. Thus, the main difference between a norm (or quasi-norm) and an F-norm is homogeneity: for quasi-norms $\|cx\| = |c|\,\|x\|$ for all scalars $c$, but for F-norms we only have $\|cx\|\le \|x\|$ for $|c|\le 1$. Despite being almost 40 years old, "An F-space Sampler" by Kalton, Peck, and Roberts is still one of the best sources for this general view of functional analysis, which focuses on topology rather than on operator theory for Hilbert spaces.
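The failure of homogeneity is easy to see concretely. A small sketch of my own: for the constant function $f\equiv t$ on $[0,1]$ with Lebesgue measure, the F-norm $\int |f|/(1+|f|)\,d\mu$ equals $t/(1+t)$ exactly.

```python
def f_norm(t):
    """F-norm of the constant function f = t on [0,1]: the integral of
    |f|/(1+|f|) is just |t|/(1+|t|) (exact, no quadrature needed)."""
    return abs(t) / (1.0 + abs(t))

# ||c f|| <= ||f|| holds for |c| <= 1, as required of an F-norm...
assert f_norm(0.5) <= f_norm(1.0)
# ...but homogeneity fails badly: ||10 f|| = 10/11, nowhere near 10 * ||f|| = 5
print(f_norm(10.0), 10 * f_norm(1.0))
```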