
Something that has always bothered me about numerical solutions to ODEs is that it isn't clear to me how well the discretization scheme can really be assumed to approximate the local derivative.

The most basic Euler discretization gives:

$$ x_{n+1} \approx x_n + h\,f(t_n,x_n), \qquad t_{n+1} = t_n + h. $$

In the even simpler autonomous case you get:

$$ x_{n+1} \approx x_n + h\,f(x_n). $$

But it's not clear to me how well the first approximation actually holds; in particular, it seems there needs to be some global bound on $f$ in order to claim any kind of error bound, since the ODE could oscillate wildly or blow up. Could someone explain to me the intuition behind why these schemes work for most problems seen in applied mathematics?
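For reference, here is a minimal sketch of the scheme above in Python (the function name, signature, and the test problem in the last line are my own illustrative choices, not from any particular library):

```python
import numpy as np

def forward_euler(f, x0, t0, t1, n_steps):
    """Forward Euler for x'(t) = f(t, x): x_{n+1} = x_n + h * f(t_n, x_n)."""
    ts = np.linspace(t0, t1, n_steps + 1)
    h = (t1 - t0) / n_steps
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    for n in range(n_steps):
        xs[n + 1] = xs[n] + h * f(ts[n], xs[n])
    return ts, xs

# Example: x' = -x, x(0) = 1 on [0, 1]; the last value approximates e^{-1}.
ts, xs = forward_euler(lambda t, x: -x, 1.0, 0.0, 1.0, 100)
```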

maxical
  • There is some ambiguity in your notation. It seems like you are using $x_{n}$ to mean a solution of the ODE evaluated at $t_{n}$. However, you are also referring to $x_{n+1}=x_{n}+hf(t_{n},x_{n})$ as the Forward Euler scheme. It's helpful to distinguish between the scheme and the solution. In the answer posted below, I distinguish between a solution of the ODE $x$ and the Forward Euler scheme denoted $\mathbf{x}$. – parsiad Sep 13 '20 at 04:50
  • For reference, there are previous topics on the global convergence or global error of the Euler method (and other Runge-Kutta methods) on this site like https://math.stackexchange.com/questions/2365839/eulers-method-global-error, https://math.stackexchange.com/questions/1921554/local-vs-global-truncation-error, or ... (I thought I remembered more). To visualize the convergence (or at least that the error behaves in an orderly fashion for not too small step sizes) see https://math.stackexchange.com/questions/3058387/empirical-error-proof-runge-kutta – Lutz Lehmann Sep 13 '20 at 09:51

1 Answer


You are right to suspect that some conditions on $f$ are required for convergence. For ease of exposition, I first give a proof assuming rather strong conditions on $f$ and subsequently relax those conditions.


Let $x$ be a function satisfying $x^{\prime}(t)=f(t,x(t))$ on the real line. Assuming $x$ is twice continuously differentiable, Taylor's theorem yields
$$ x(t+h)=x(t)+hf(t,x(t))+O(h^{2}) $$
for any $h>0$. Next, consider the unit interval and subdivide it into $N$ pieces of length $h=1/N$, denoting the boundaries by $t_{n}=nh$. The Forward Euler scheme is defined by $\mathbf{x}_{0}=x(0)$ and the iterates
$$ \mathbf{x}_{n+1}\equiv\mathbf{x}_{n}+hf(t_{n},\mathbf{x}_{n}). $$
Note, in particular, that $x$ and $\mathbf{x}$ refer to different quantities: the former satisfies an ODE while the latter is the solution of the Forward Euler scheme. Ultimately, we would like to bound the difference between these two quantities. Denote this error $\mathbf{e}_{n}\equiv\mathbf{x}_{n}-x(t_{n})$. Then,
$$ \mathbf{e}_{n+1}=\mathbf{e}_{n}+hf(t_{n},\mathbf{x}_{n})-hf(t_{n},x(t_{n}))+O(h^{2}). $$

Assuming $f$ is bounded independently of $t$ and $x$, the above implies $\mathbf{e}_{n+1}=\mathbf{e}_{n}+O(h)$ and hence $\mathbf{e}_{n}=nO(h)$ by induction. Since $n\leq N=1/h$, $\mathbf{e}_{n}=O(1)$. That is, the error is bounded independently of $n$ (we will tighten this bound below).

Assuming the first spatial derivative $f_{x}$ of $f$ exists and is bounded independently of $t$ and $x$, another application of Taylor's theorem yields
$$ f(t_{n},\mathbf{x}_{n})=f(t_{n},x(t_{n})+\mathbf{e}_{n})=f(t_{n},x(t_{n}))+O(1)\,\mathbf{e}_{n}, $$
where the $O(1)$ factor is bounded by $\sup\left|f_{x}\right|$. Combining this with the recurrence for the error,
$$ \mathbf{e}_{n+1}=\mathbf{e}_{n}\left(1+O(h)\right)+O(h^{2}). $$
By induction, $\mathbf{e}_{n}=nO(h^{2})$ (the factors $\left(1+O(h)\right)^{n}$ remain bounded since $nh\leq1$). Since $n\leq N=1/h$, we obtain the tighter bound $\mathbf{e}_{n}=O(h)$. That is, the error is linear in the size of the discretization.
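To see the $O(h)$ bound numerically, here is a small check (an illustration only, not part of the argument; the test problem $x'=-x$, $x(0)=1$ on $[0,1]$ is an arbitrary choice with known solution $e^{-t}$). Halving $h$ roughly halves the maximum error:

```python
from math import exp

def euler_max_error(n_steps):
    """Forward Euler for x' = -x, x(0) = 1 on [0, 1]; return max_n |x_n - e^{-t_n}|."""
    h = 1.0 / n_steps
    t, x, err = 0.0, 1.0, 0.0
    for _ in range(n_steps):
        x += h * (-x)              # x_{n+1} = x_n + h f(t_n, x_n) with f(t, x) = -x
        t += h
        err = max(err, abs(x - exp(-t)))
    return err

for N in (10, 20, 40, 80):
    print(N, euler_max_error(N))   # the error shrinks roughly like h = 1/N
```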


The boundedness of $f$ and $f_{x}$ is a stronger requirement than necessary. Indeed, assume $f$ is locally Lipschitz in space, uniformly in time. Let $A=[-a,a]$ be an interval with $a>0$. By restricting the spatial argument $x$ of $f$ to the interval $A$, we obtain a function that is (globally) Lipschitz in space, uniformly in time, with some Lipschitz constant $L$. In particular, for $x$ in $A$, wherever the spatial derivative exists,
$$ \left|f_{x}(t,x)\right|\leq\lim_{h\rightarrow0}\frac{L\left|h\right|}{\left|h\right|}=L, $$
and
$$ \left|f(t,x)\right|\leq\left|f(t,0)\right|+L\left|x\right|. $$
If, in addition, $\sup_{t \in [0,1]}|f(t,0)|<\infty$, the above inequalities imply that $f$ and $f_{x}$ are bounded on $[0,1] \times A$ (and where $f_{x}$ does not exist, the Lipschitz constant $L$ can be used directly in the error recurrence). This is sufficient for our purposes so long as the scheme is stable (that is, $\max_{n}|\mathbf{x}_{n}|$ is bounded independently of $h$). In this case, the proof given in the first part of this post still works, since both $x$ restricted to $[0,1]$ and $\mathbf{x}$ are bounded, so we can choose $a$ large enough that both stay in $A$ (recall that we assumed the continuity of $x$ at the beginning of this post).
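As a concrete illustration of the relaxed conditions, take the example raised in the comments, $x'=x^{2}+3x$: here $f(x)=x^{2}+3x$ is unbounded but locally Lipschitz. Choosing (my own choice, for illustration) the initial condition $x(0)=1$, the exact solution is $x(t)=3e^{3t}/(4-e^{3t})$, which blows up at $t=\ln(4)/3\approx0.46$ but exists and stays bounded on $[0,0.3]$, so the $O(h)$ behaviour is still observed there:

```python
from math import exp

def euler_max_error(n_steps, T=0.3):
    """Forward Euler for x' = x^2 + 3x, x(0) = 1 on [0, T], compared against the
    exact solution x(t) = 3 e^{3t} / (4 - e^{3t}), valid for T < ln(4)/3."""
    h = T / n_steps
    t, x, err = 0.0, 1.0, 0.0
    for _ in range(n_steps):
        x += h * (x * x + 3.0 * x)
        t += h
        err = max(err, abs(x - 3.0 * exp(3.0 * t) / (4.0 - exp(3.0 * t))))
    return err

for N in (50, 100, 200, 400):
    print(N, euler_max_error(N))   # the error again shrinks roughly like h = T/N
```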

parsiad
  • A problem is that many simple ODEs do not have a bounded $f$, which makes it very confusing. For example, the simple system $x' = x^2+3x$. But I see numerical solutions to these kinds of ODE all the time. – maxical Sep 13 '20 at 05:23
  • @FourierFlux: The conditions I gave are stronger than necessary. I added a remark (see above). – parsiad Sep 13 '20 at 07:07
  • You missed the usually used weaker regularity condition on $f$, that it be (locally) Lipschitz in $x$. You have that already weakly included in the local bound on $f_x$, but could make it explicit. – Lutz Lehmann Sep 13 '20 at 09:17
  • @FourierFlux : The interest is for the convergence to the exact solution where the exact solution exists. Over such a (closed) time segment, the solution is automatically bounded, $|x|<M$. Then the $x$-derivative is also bounded, here by $L=2M+3$, and the global convergence machine works as described. Note that you want $L|h|\ll 1$ to get any meaningful numerical values (which practically means $|Lh|<0.1$ or similar), with increasing $M$ in the example the step size needs to become correspondingly smaller. – Lutz Lehmann Sep 13 '20 at 09:26
  • @LutzLehmann: Good catch. I switched to the Lipschitz condition. – parsiad Sep 13 '20 at 22:12
  • How do we know beforehand that the solution $x$ will stay in the interval? I am guessing this is related to the Picard existence theorem? – maxical Sep 14 '20 at 19:16
  • @FourierFlux: I assumed the continuity of $x$ on $[0,1]$ at the beginning of the post. The image of a compact set under a continuous function is itself compact.

    You can imagine, however, a case where $x^\prime = f$ is satisfied only on $(0, 1)$ and, in particular, $x(t)$ blows up as $t \rightarrow 1$. Then, this analysis would fail. The key point is, as Lutz Lehmann says above, "the interest is for the convergence to the exact solution where the exact solution exists."

    – parsiad Sep 14 '20 at 23:56
  • Hmm, $x$ can be continuous but still exit the above interval; how do you know the bounds of $A$ without first having an analytic expression for the solution? Seems circular. – maxical Sep 15 '20 at 04:21
  • @FourierFlux: It's not circular: the argument assumes existence of a solution.

    Also, you're conflating analytical solution with existence. An analytical/closed form solution refers to something you can write down in terms of "nice" functions. Note that "nice" here is subjective. Take for example Bessel's differential equation. Besides, if all differential equations had analytical solutions, we would not need numerical methods.

    – parsiad Sep 15 '20 at 04:35
  • Let me rephrase: suppose we know a solution exists (using the Picard theorem); how can we deduce a box which we know the solution will stay within, in order to apply the above theory? Because unless we have a way of finding a box which we know the solution (or part of the solution) stays within, the above doesn't really matter. – maxical Sep 15 '20 at 05:00
  • @FourierFlux: If you know a (continuous) solution $u$ exists in $[-\epsilon, \epsilon]$, then within that interval it takes values in $[-a, a]$ where $$a = \max_{|t| \leq \epsilon} |u(t)|.$$ The maximum is finite because of continuity. – parsiad Sep 15 '20 at 05:14
  • Yes! But if you don't have an analytic solution you don't have a way to find $a$ directly using the above. – maxical Sep 15 '20 at 05:26
  • @FourierFlux: You don't need to compute something to know it exists. For example, I know that polynomials of all orders have roots, but for high enough orders, I can't always write down those roots in terms of nice functions.

    I recommend learning real analysis (I don't mean this disparagingly; it will help) and coming back to read this answer at a later date. Principles of Mathematical Analysis is a good standard undergraduate text.

    – parsiad Sep 15 '20 at 05:30
  • You're not understanding: to actually declare a meaningful error bound when doing numerical approximation, you need to have a way to actually compute or approximate the values. – maxical Sep 15 '20 at 05:59
  • @FourierFlux: are you suggesting $\mathbf{e}_n = O(h)$ is a meaningless error bound? You can get a more accurate form for the $O(h)$ term by not dropping terms in the Taylor series.

    Regardless, if you're looking for a number (which it sounds to me like you are, as your previous comments suggest you are seeking numerical quantities) you're not going to find one in the case of general $f$ as above. If you fix your attention to a particular $f$, you'll be able to get a numerical bound for each choice of $N$.

    – parsiad Sep 15 '20 at 06:15
  • Hmm, to make this more concrete: suppose we want to analyze the ODE in the first comment with an initial condition of 0 at $t=0$, and we want to numerically compute the solution using Euler's method. How can we get a bound? – maxical Sep 15 '20 at 07:51
  • @FourierFlux: That question is sufficiently long/different from your original question that you should make a new post. Feel free to link it here once you do. – parsiad Sep 15 '20 at 09:06