
Suppose $V\subset C[0,1]$ is a linear space with $\dim V=N$, and that there exists an element $u\in V$ such that $\int_0^1 u \neq 0$. Suppose $\Omega \subset [0,1]$ is a closed set with the property that if $f\in V$ and $\sup_{\Omega}f\leq 0$, then $\int_0^1 f\leq 0$. Prove that there exist $x_1,\dots,x_N\in \Omega$ and $\lambda_1,\dots,\lambda_N\geq 0$ such that \begin{align*} \int_0^1 g=\sum_{j=1}^N\lambda_j g(x_j),\quad\forall g\in V. \end{align*}

Below is what I have tried so far: I attempt to solve the problem by induction on $\dim V$.

When $N=1$, $V=\operatorname{span}(f_1)$. Since $u$ is a nonzero multiple of $f_1$, we have $\int_0^1 f_1\neq 0$. Without loss of generality, suppose $\int_0^1 f_1>0$. Then by the property of $\Omega$ (taking the contrapositive) we have $$\sup_{\Omega}f_1>0.$$

So there exists $x_1\in \Omega$ such that $f_1(x_1)>0$. Take $\lambda_1=\frac{\int_0^1 f_1}{f_1(x_1)}$; then $\lambda_1>0$ and $$\int_0^1 f_1 =\lambda_1 f_1(x_1).$$ By linearity, $\int_0^1 g=\lambda_1 g(x_1)$ for every $g=cf_1\in V$, so the conclusion holds for $N=1$.
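For a concrete illustration of this base case (the particular choices of $V$ and $\Omega$ here are mine, not part of the problem): take $f_1(x)=x$, so $V=\operatorname{span}(f_1)$, and $\Omega=\{1/2\}$. The hypothesis on $\Omega$ holds, since $\sup_{\Omega}(cf_1)=c/2\leq 0$ forces $c\leq 0$ and hence $\int_0^1 cf_1=c/2\leq 0$. The construction above then gives
$$x_1=\tfrac12,\qquad \lambda_1=\frac{\int_0^1 f_1}{f_1(1/2)}=\frac{1/2}{1/2}=1,\qquad \int_0^1 cf_1=\frac{c}{2}=\lambda_1\, cf_1(x_1)\quad\text{for all }c\in\mathbb{R}.$$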


1 Answer


This is not a full answer, since I only address the case $\Omega = [0,1]$, but the question is linked to generalizations of the mean value theorem to vector-valued functions.

Observe that the hypothesis $f \in V,\ \sup_{\Omega} f \leq 0 \Rightarrow \int_0^1 f \leq 0$ is equivalent to $g \in V \Rightarrow \int_0^1 g \leq \sup_{\Omega} g$.

The $\Leftarrow$ direction is straightforward. For the $\Rightarrow$ direction, apply the hypothesis to $f(x) = g(x) - \sup_{\Omega} g$, as spelled out below.
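Spelled out (note that this step uses $f\in V$, so it tacitly assumes the constant functions belong to $V$; for $\Omega=[0,1]$ the resulting inequality is true in any case):
$$\sup_{\Omega} f = \sup_{\Omega} g - \sup_{\Omega} g = 0 \leq 0 \;\Longrightarrow\; 0 \geq \int_0^1 f = \int_0^1 g - \sup_{\Omega} g \;\Longrightarrow\; \int_0^1 g \leq \sup_{\Omega} g.$$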

Taking $-g$ instead of $g$, we also find that $g \in V \Rightarrow \int_0^1 g \geq \inf_{\Omega} g$, i.e.

$$g \in V \Rightarrow \inf_{\Omega} g \leq \int_0^1 g \leq \sup_{\Omega} g$$

In the case $\Omega = [0,1]$ (which is closed), this is just the standard lower and upper bound for a Riemann integral, so the hypothesis gives nothing beyond what we already know.

So let us assume that $\Omega = [0,1]$ and that $V$ is an $N$-dimensional subspace of $C[0,1]$ with basis $\{u_1,\dots,u_N\}$.

Define the vector-valued function $F : [0,1] \rightarrow \mathbb R^N$ by $$F(x) = \begin{pmatrix}u_1(x) \\ \vdots \\ u_N(x) \end{pmatrix}$$

A generalization of the mean value theorem given in "Mean value theorems for vector valued functions" by Robert M. McLeod implies that there exist $x_1,\dots,x_N \in (0,1)$ and $\lambda_1,\dots,\lambda_N \geq 0$ with $\sum_{k=1}^N \lambda_k = 1$ such that $$\int_0^1 F = \sum_{k=1}^N \lambda_k F(x_k),$$ where the integral $\int_0^1 F$ is taken component-wise.
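To see what this says in a small example (the basis here is my own choice, only to illustrate the statement): with $N=2$, $u_1(x)=1$, $u_2(x)=x$, we get $F(x)=(1,x)^T$ and
$$\int_0^1 F = \begin{pmatrix}1\\ \tfrac12\end{pmatrix} = F(\tfrac12),$$
so one may take $x_1=x_2=\tfrac12$ and $\lambda_1=\lambda_2=\tfrac12$ (weights summing to $1$). This is just the midpoint rule, which is exact on $\operatorname{span}\{1,x\}$.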

For an arbitrary $g \in V$, we can write $$g(x) = \alpha \cdot F(x)$$ for some $\alpha \in \mathbb R^N$, where $\cdot$ denotes the usual dot product.

Hence,

$$\int_0^1 g = \int_0^1 \alpha \cdot F = \alpha \cdot \int_0^1 F = \alpha \cdot \sum_{k=1}^N \lambda_k F(x_k) = \sum_{k=1}^N \lambda_k \alpha \cdot F(x_k) = \sum_{k=1}^N \lambda_k g(x_k)$$
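As a quick numerical sanity check of this last chain of equalities in the illustrative case above (assumed basis $\{1,x\}$, nodes and weights from the midpoint rule; a sketch only, not part of the proof):

```python
import numpy as np

# Illustration only: assumed basis u1(x) = 1, u2(x) = x on [0, 1], so F(x) = (1, x),
# with nodes x1 = x2 = 1/2 and weights lam1 = lam2 = 1/2 (summing to 1).
nodes = np.array([0.5, 0.5])
weights = np.array([0.5, 0.5])

def F(x):
    return np.array([1.0, x])

rng = np.random.default_rng(0)
for _ in range(5):
    alpha = rng.normal(size=2)                # g = alpha . F, i.e. g(x) = a + b*x
    integral = alpha @ np.array([1.0, 0.5])   # exact value of  ∫_0^1 g = a + b/2
    quadrature = sum(w * (alpha @ F(x)) for w, x in zip(weights, nodes))
    assert np.isclose(integral, quadrature)
print("quadrature reproduces the integral for every sampled g in span{1, x}")
```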
