
I was having trouble with the following question from Luenberger's *Optimization by Vector Space Methods*:

2.10 A normed space is said to be strictly normed if $\|x + y\| = \|x\| + \|y\|$ implies that $y = \theta$ or $x = \alpha y$ for some $\alpha$.
a) Show that $L_p[0,1]$ is strictly normed for $1 < p < \infty$.
b) Show that if $X$ is strictly normed, the solution to 2.9 (below) is unique.

Problem 2.9 (attempted here, any corrections/suggestions appreciated) is:

2.9: Let $X$ be a normed linear space and let $x_1, x_2, \ldots, x_n$ be linearly independent vectors from $X$. For fixed $y\in X$, show that there are coefficients $a_1, a_2, \ldots, a_n$ minimizing $\|y - a_1 x_1 - a_2 x_2 - \ldots - a_n x_n\|$.

Question 1: could you provide feedback, if any, on my solution to 2.9?

Both 2.9 and 2.10 are introduced before we've learned Lebesgue integration or about measure or measurable functions, so there should be a solution to 2.10 that doesn't require this knowledge.

Update 1: The solution to problem 2.10a is available here. I had a question about it which I've included as a comment to the first response, if anyone is able to answer it.

Update 2: I was able to find a solution to 2.10b here, which I modified to fit the problem in the answer below.

akm
  • 374

1 Answer


2.10 A normed space is said to be strictly normed if $\|x + y\| = \|x\| + \|y\|$ implies that $y = \theta$ or $x = \alpha y$ for some $\alpha$.

a) Show that $L_p[0,1]$ is strictly normed for $1 < p < \infty$.

The solution is available here.
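For completeness, here is an outline of the standard argument for part (a); this is my own sketch and may or may not match the linked solution. Suppose $\|f+g\|_p = \|f\|_p + \|g\|_p$ with $g \neq \theta$ and $1 < p < \infty$. Then

```latex
% Sketch (standard argument; assumptions: real-valued $f, g \in L_p[0,1]$).
\begin{align*}
\|f+g\|_p^p = \int_0^1 |f+g|^p
  &\le \int_0^1 |f|\,|f+g|^{p-1} + \int_0^1 |g|\,|f+g|^{p-1} \\
  &\le \bigl(\|f\|_p + \|g\|_p\bigr)\,\|f+g\|_p^{p-1},
\end{align*}
% The second step is H\"older's inequality with conjugate exponents
% $p$ and $q = p/(p-1)$.
```

When $\|f+g\|_p = \|f\|_p + \|g\|_p$, both inequalities above are equalities. Equality in Hölder for $1 < p < \infty$ forces $|f|^p$ to be proportional to $|f+g|^p$ (and likewise $|g|^p$), and equality in the pointwise triangle inequality forces $f$ and $g$ to have the same sign almost everywhere, which together give $f = \alpha g$ for some $\alpha \ge 0$.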

b) Show that if $X$ is strictly normed, the solution to 2.9 is unique.

Here's the best solution I've found:

Let $X$ be a strictly normed space, $y$ an element of $X$, and $\mathcal{U}\subseteq X$ the subspace spanned by $x_1, x_2, \ldots, x_n \in X$. Write $u_a = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n$ for the approximation to $y$ with coefficients $a = (a_1, a_2, \ldots, a_n)$. Suppose $u_\beta$ and $u_\nu$ are both best approximations of $y$, with $\beta \neq \nu$ and $\|y - u_\beta\| = \|y - u_\nu\| = \lambda$.

If $y\in \mathcal{U}$, then $u_\beta = u_\nu = y$; since $x_1, x_2, \ldots, x_n$ are linearly independent, there is exactly one linear combination equal to $y$, so $\beta = \nu$, a contradiction. Hence $y\not\in \mathcal{U}$, so neither $y - u_\nu$ nor $y - u_\beta$ equals $\theta$, and $\lambda > 0$. Moreover, $y - u_\nu \neq \alpha(y - u_\beta)$ for every $\alpha$: if $\alpha = 1$, this would give $u_\nu = u_\beta$ and hence $\beta = \nu$; if $\alpha \neq 1$, solving for $y$ would give $y = \frac{1}{1-\alpha}u_\nu - \frac{\alpha}{1-\alpha}u_\beta \in \mathcal{U}$. Since $X$ is strictly normed, the contrapositive of the definition makes the triangle inequality strict for this pair. So

\begin{align} \Big\|y - \frac{1}{2}(u_\nu + u_\beta)\Big\| &= \Big\|\frac{1}{2}(y - u_\nu) + \frac{1}{2}(y - u_\beta)\Big\| \\ &< \Big\|\frac{1}{2}(y - u_\nu)\Big\| + \Big\|\frac{1}{2}(y - u_\beta)\Big\| \\ &= \frac{1}{2}\|y - u_\nu\| + \frac{1}{2}\|y - u_\beta\| = \lambda \end{align} meaning that $u_{(\nu + \beta)/2}$ provides a better approximation of $y$ than $u_\beta$ or $u_\nu$, contradicting the assertion that those were the best approximations. So, $\beta = \nu$.
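To see why strict norming is needed, here is a small illustrative example of my own (not from Luenberger) showing that best approximations need not be unique in a space that is not strictly normed:

```latex
% Illustration: $X = (\mathbb{R}^2, \|\cdot\|_\infty)$, $n = 1$,
% $x_1 = (1,0)$, $y = (0,1)$. For a coefficient $a$,
\[
\|y - a x_1\|_\infty = \|(-a,\, 1)\|_\infty = \max(|a|, 1) = 1
\quad \text{for every } |a| \le 1,
\]
% so every $a \in [-1, 1]$ is a best approximation.
```

Consistently with part (b), $\|\cdot\|_\infty$ is not strictly normed: $\|(1,0) + (1,1)\|_\infty = 2 = \|(1,0)\|_\infty + \|(1,1)\|_\infty$, yet $(1,1) \neq \theta$ and $(1,0) \neq \alpha(1,1)$ for any $\alpha$.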
