3

Consider the function $y(x)$ satisfying $y(0) = 1$ and $y'(x) = x^2 + y(x)^2$ for every $x$ in a maximal interval $(-a,a)$, for some $a \in (0,\infty]$. The function is nonelementary and quite complicated, as a quick Wolfram Alpha query can attest. But if you drop the $x^2$ term from the IVP, the solution becomes much simpler. That is, consider the function $y_*(x)$ such that $y_*(0) = 1$ and $y_*'(x) = y_*(x)^2$ for each $x \in \operatorname{dom} y_*$. In this case, solving the separable equation yields $y_*(x) = (1-x)^{-1}$, defined on $\operatorname{dom} y_* = (-\infty, 1)$.
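For reference, here is the quick separation-of-variables computation behind that formula:
$$\frac{y_*'(x)}{y_*(x)^2} = 1 \;\Longrightarrow\; -\frac{1}{y_*(x)} = x + C \;\Longrightarrow\; y_*(x) = \frac{1}{1-x},$$
where the initial condition $y_*(0) = 1$ forces $C = -1$, and the solution is maximal on $(-\infty,1)$ because it blows up as $x \to 1^-$.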

It is not difficult to convince yourself that $y_*(x) \leq y(x)$ for each $x \in [0,a)$ (and therefore $y(1)$ is undefined and $a \leq 1$). Intuitively, this is because $x^2 + y^2 \geq y^2$ for each $(x,y) \in \mathbb R^2$. Since the former governs the growth of $y$ and the latter the growth of $y_*$, it must be that $y_*$ grows no faster than $y$, hence $y_*(x) \leq y(x)$.

What I wonder about is how to prove this inequality formally, and in particular how to turn the intuitive argument above into a rigorous one. The catch is that you cannot directly compare the IVPs of the two functions, as the first right-hand side is really $x^2 + y(x)^2$ while the second is $y_*(x)^2$. To compare them directly, you would need the bound $y_* \leq y$ to already be true, but that is exactly what we are trying to show.

Imagine a different problem, where the functions $y$ and $y_*$ are replaced by sequences and the derivative is replaced by the forward difference. That is, imagine $y$ is the sequence given by $y(0) = 1$ and $(\Delta y)(n) = n^2 + y(n)^2$, and similarly for $y_*$. Then you can easily prove by induction that $y(n) \geq y_*(n)$ holds for every $n$.
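Spelling out the induction step (both sequences start at $1$ and have nonnegative increments, so they stay $\geq 1$): if $y(n) \geq y_*(n) \geq 1$, then
$$y(n+1) = y(n) + n^2 + y(n)^2 \;\geq\; y_*(n) + y_*(n)^2 = y_*(n+1),$$
since $n^2 \geq 0$ and $t \mapsto t + t^2$ is increasing for $t \geq 1$.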

Is it possible, then, to use something like "real induction" or "continuous induction" to prove that $y(x) \geq y_*(x)$ for every $x$? I did some digging and found one theorem which seems close to what I want, but I still could not figure it out.

Suppose $S \subset \mathbb R$ is a set of real numbers satisfying the following:

  1. $a \in S$,
  2. $S$ is extensible, in the sense that $\forall x \in S\ \exists y > x\ ([x,y] \subset S)$, and
  3. $S$ is upward-closed, in the sense that if $(x_n)_n$ is any increasing, bounded sequence in $S$, then $\lim_n x_n \in S$.

Then $[a,\infty) \subset S$.

One problem with applying this theorem is that I would like to apply it to the domain of $y(x)$, which is (purportedly) a bounded interval, not an infinite one. Setting that aside, I cannot even find a way to show that $\{x : y_*(x) \leq y(x)\}$ is extensible. Seemingly, you want to show that for each $x$ such that $y_*(x) \leq y(x)$, there exists a $\delta > 0$ such that $y_*(x + h) \leq y(x+h)$ for every $h \in (0,\delta)$. Attempting to unpack the definition of the derivative, I have shown something which is almost this, but falls short.

That is, so far I have shown that for every $\varepsilon > 0$ and every $x$ such that $y_*(x) \leq y(x)$, there is a $\delta > 0$ such that $0 < h < \delta$ implies $y_*(x+h) \leq y(x+h) + 2\varepsilon h$. But you cannot eliminate the $\varepsilon$ from this statement by sending it to zero without being forced to send $h$ to zero as well.

Rob
  • 6,727

3 Answers

2

Continuous induction was discussed recently. I proposed a definition of induction that fits both the discrete and the continuous case.

For $\mathbb{R}$, proving a property $P(x)$ by induction means we have to prove:

  • (0) The property $P$ holds at some $x_0$.
  • (1) $\forall a,b$: if $P$ holds on $[a, b)$, then it holds at $b$.
  • (2) $\forall a,b$: if $P$ holds on $[a, b]$, then $\exists c>b$ such that $P$ holds on $[b,c]$.

From that, we conclude that $P$ holds on $[x_0,+\infty)$. Of course, the propagation can also be restricted to some interval $[x_0, x_1]$ or $[x_0, x_1)$ if the propagation argument breaks down after $x_1$.

Here is a tentative application to the present case, which fails. I post it for possible improvement later (by me or someone else), and for comparison with the problem you encounter in your own attempt.

So, to apply this to our case, we define $z(x)=y(x)-y_*(x)$. Then
$$z'(x)=x^2+y(x)^2-y_*(x)^2=x^2+\big(y(x)+y_*(x)\big)z(x),$$
using the factorization $y^2-y_*^2=(y+y_*)(y-y_*)$.

Our induction hypothesis will be: $y(x) \ge 1, y_*(x) \ge 1, z(x)\ge 0$.

(0) It holds at $x=0$.

(1) If it holds on $[a,b)$, then it also holds at $b$, since the three functions are continuous.

(2) If it holds on $[a,b]$, then $y'(b)$ and $y_*'(b)$ are both $>0$, so the two functions are locally increasing and therefore stay $\ge 1$ on some interval to the right of $b$.
And this is where it fails: we cannot argue similarly for $z(x)$, because the case $b=0$ has to be dealt with, and there we cannot conclude $z'(0) > 0$ (indeed $z(0)=0$, so $z'(0)=0^2+\big(y(0)+y_*(0)\big)z(0)=0$); hence we cannot obtain a nondegenerate interval to the right of $0$ on which $z(x)>0$.

It seems the difficulty has less to do with the method (continuous induction) and more with the induction hypothesis: we do not have a hypothesis that is both valid at $0$ and strong enough to allow propagation.

Another option would be to refine point (2) above specifically for differential equations (i.e., the step: $\forall a,b$, if $P$ holds on $[a, b]$ then $\exists c>b$ such that $P$ holds on $[b,c]$): currently, this step requires making a discrete jump from $b$ to some $c$, and it does not exploit the differential structure.

  • Hmm, this is very interesting. I agree that the hardest part seems to be going from zero to anywhere else; as you observed, as soon as one has the hypothesis valid for an $x_0 > 0$, then the hypothesis is true for all $x \geq x_0$ (in the domain of $y$). – Rob Aug 26 '22 at 18:34
0

Okay, I have found a proof(?) of the inequality $y_*(x) \leq y(x)$, though my argument does not use real induction and I am still curious if real induction can be used.

This proof is based on the observation that the inequality is easy to prove for sequences, and the idea of using a numerical method (e.g., Euler's method) to approximate both $y$ and $y_*$.

Specifically, let $h > 0$ be fixed and let $y^h$ be the function defined on $\{0,h,2h,\ldots\}$ given by $y^h(0) = 1$ and $(\Delta^h y^h)(nh) = (nh)^2 + (y^h(nh))^2$ for all $n$. Similarly, define $y_*^h$ by $y_*^h(0) = 1$ and $(\Delta^h y_*^h)(nh) = (y_*^h(nh))^2$. Here we define $(\Delta^h f)(x) = \frac1h(f(x+h) - f(x))$.
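Unpacking the definition of $\Delta^h$, these are just the explicit Euler schemes for the two IVPs:
$$y^h\big((n+1)h\big) = y^h(nh) + h\Big[(nh)^2 + y^h(nh)^2\Big], \qquad y_*^h\big((n+1)h\big) = y_*^h(nh) + h\,y_*^h(nh)^2.$$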

Then, it is easy to show by induction that $y_*^h(nh) \leq y^h(nh)$ for every $n$.

Now fix $x > 0$ (in the domains of both $y$ and $y_*$), let $N$ be arbitrarily large, and set $h = x/N$. By the convergence of Euler's method, we have $y^h(Nh) \to y(x)$ as $N\to \infty$, and similarly $y_*^h(Nh) \to y_*(x)$. As $y_*^h(Nh) \leq y^h(Nh)$ for every $N$, it follows that $y_*(x) \leq y(x)$.
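For what it is worth, here is a small numerical sketch of that discretization, purely as a sanity check rather than as part of the proof; the function name `euler_compare` and the particular choices of $x$ and $N$ are mine and just illustrative:

```python
# Explicit Euler for y' = x^2 + y^2 and y_*' = y_*^2, both equal to 1 at 0.
# Sanity check (not a proof): the discrete inequality y_*^h(nh) <= y^h(nh)
# holds at every step, and both iterates approximate the true solutions.

def euler_compare(x, N):
    """Run N Euler steps of size h = x/N for both IVPs; return the final values."""
    h = x / N
    y, y_star = 1.0, 1.0
    for n in range(N):
        t = n * h
        # simultaneous update so both right-hand sides use the old values
        y, y_star = y + h * (t**2 + y**2), y_star + h * y_star**2
        assert y_star <= y  # the discrete comparison proved by induction
    return y, y_star

if __name__ == "__main__":
    x = 0.5  # stay well inside (0, 1), where y_*(x) = 1/(1 - x) is defined
    for N in (10, 100, 1000, 10000):
        y_h, y_star_h = euler_compare(x, N)
        # y_star_h should approach 1/(1 - x) = 2.0 as N grows
        print(N, y_h, y_star_h, 1 / (1 - x))
```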

This argument seems clunky to me; surely there is a more elegant proof of the associated general problem:

Let $f(x,y)$ and $g(x,y)$ be smooth, nonnegative functions with $g(x,y) \leq f(x,y)$ for every $(x,y) \in [0,\infty)^2$. Suppose $y$ and $y_*$ are smooth functions such that $y(0) = y_*(0)$ and $y'(x) = f(x,y(x))$ and $y_*'(x) = g(x,y_*(x))$ for every $x$. Then, $y_*(x) \leq y(x)$ for every $x$.

I am curious to see a more elegant proof than the above idea, and I am especially curious to see if the idea of real induction can be applied.

Rob
  • 6,727
0

Some more thoughts on this problem, but too long for a comment:

First, consider $S = \{x : y_*(x) \leq y(x)\}$. We know in advance that $\operatorname{dom} y$ is bounded, so $S$ cannot be simultaneously extensible and upward closed; otherwise, by the theorem above, $S$ would be unbounded. A bounded open interval is extensible, so it is the "upward closed" property which must be weakened or replaced.

Here, then, is a different formulation of real induction:

Let $a \in \mathbb R$ and let $b \in (a,\infty]$. If $S \subset \mathbb R$ satisfies the following three properties, then $[a,b) \subset S$.

  1. $a \in S$

  2. $S$ is extensible, in the sense that $\forall x \in S\ \exists y > x \ \big([x,y] \subset S\big)$.

  3. $S$ is upward closed in $[a,b)$ or relative to $[a,b)$, in the sense that if $(x_n)_n$ is any increasing bounded sequence in $S$ with limit $x_\infty < b$, then $x_\infty \in S$.

The difference is subtle: increasing sequences in $S$ are only required to have their limits in $S$ when the limit is strictly less than $b$; equivalently, only sequences that are bounded away from $b$ need to have their limits in $S$. When $b = \infty$, this is the same as being upward closed in the original sense.


Additional note: the $b$-upward closed property is easy to establish for our $S$ (here $b = \sup \operatorname{dom} y$; we already know $\operatorname{dom} y$ is an interval by the existence-uniqueness theorem).

Indeed, if $(x_n)_n$ is an increasing sequence with limit $x_\infty \in \operatorname{dom} y$ such that $y_*(x_n) \leq y(x_n)$ for each $n$, then $y_*(x_\infty) \leq y(x_\infty)$ by continuity of $y$ and $y_*$. This establishes that $S$ is $b$-upward closed. Therefore, the hard part of the problem is to establish that $S$ is extensible.

Rob
  • 6,727