
I'm writing a presentation on modelling fluid flow. We used the second-order Runge-Kutta method as the numerical method to describe the flow.

I just want to verify that fourth-order Runge-Kutta would be of a higher degree of accuracy - I can't find it stated anywhere online.

I'm 99% sure but seeing as it counts towards my final mark, I figured I'd ask the intellects of MSE to verify this.

– Doug

1 Answer


RK4 with $100$ steps of step size $h=0.01$ integrates over the time interval $T=1$ with an error of magnitude $10^{-8}$, using $400$ evaluations of the system function.

A second-order method with the same number of steps will give an error of magnitude $10^{-4}$. With the same $400$ function evaluations ($200$ steps) you get $h=0.005$ and an error of about $2.5\cdot10^{-5}$. To get an error of about $10^{-8}$ you would need a step size of $h=10^{-4}$, i.e. $20\,000$ function evaluations.
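
As a quick sanity check of this bookkeeping (a minimal sketch, assuming the error behaves like $h^p$ with a constant of roughly $1$, two function evaluations per step for a second-order method and four per step for RK4; cost_for_error is just an illustrative helper name):

# Rough cost/accuracy estimate, assuming error ~ h^p with a constant of about 1.
# The constant is problem dependent; this only reproduces the orders of magnitude.
def cost_for_error(order, evals_per_step, target_error, T=1.0):
    h = target_error**(1.0/order)   # error ~ h^order  =>  h ~ error^(1/order)
    steps = T/h
    return h, steps, evals_per_step*steps

for name, order, evals in [("RK2", 2, 2), ("RK4", 4, 4)]:
    h, steps, cost = cost_for_error(order, evals, 1e-8)
    print("%s: h=%.1e, steps=%.0f, evaluations=%.0f" % (name, h, steps, cost))
# RK2: h=1.0e-04, steps=10000, evaluations=20000
# RK4: h=1.0e-02, steps=100, evaluations=400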


Note that there are lower limits for the step size. They depend on the integration time interval and the scale of the system function, but roughly, for computations using the double type and an order $p$ method, the step size should never be lower than $10^{-15/(p+1)}$, for RK4 thus $10^{-3}$.
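
A rough sketch of where that rule of thumb comes from (assuming each step contributes a truncation error of about $h^{p+1}$ and a rounding error of about the machine precision $\mu\approx10^{-16}$ of the double type, with all other constants taken as $1$): over the $T/h$ steps the total error behaves like
$$\operatorname{err}(h)\approx\frac{T}{h}\bigl(h^{p+1}+\mu\bigr)=T\Bigl(h^{p}+\frac{\mu}{h}\Bigr),$$
which is minimized at
$$h_{\min}\approx\Bigl(\frac{\mu}{p}\Bigr)^{1/(p+1)}\sim10^{-16/(p+1)},$$
in line with the rule above; below that step size the accumulated rounding error dominates and the result gets worse, not better.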


To argue about the general error behavior one does not need the details of the method, only its error order. (That said, any cookbook on numerical methods has RK4; it is not that complicated.)

You only need to know that the error of an order $p$ method is proportional to $M_pTh^p$, where $M_p$ is the maximum of the coefficient of $h^p$ in the local discretization error, i.e., the difference between $(y_{k+1}-y_k)/h$ and the same difference quotient for an exact solution.

That is, $M_p$ stands for the scale of the system function and $T$ for the length of the integration interval. A more exact estimate is $\frac{M_p}{L}(e^{LT}-1)h^p$, with $L$ the Lipschitz constant of the system.
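
For completeness, a sketch of how that estimate arises (the standard argument, assuming the local error per step is at most $M_ph^{p+1}$ and an existing error is amplified by at most $e^{Lh}$ per step): with $e_k$ the global error after $k$ steps and $Nh=T$,
$$|e_{k+1}|\le e^{Lh}|e_k|+M_ph^{p+1}\quad\Longrightarrow\quad|e_N|\le M_ph^{p+1}\sum_{j=0}^{N-1}e^{jLh}=M_ph^{p+1}\,\frac{e^{LT}-1}{e^{Lh}-1}\le\frac{M_p}{L}\bigl(e^{LT}-1\bigr)h^{p},$$
using $e^{Lh}-1\ge Lh$ in the last step.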


Practical example

Striving for an error level of $10^{-4}$ over the integration interval $T=1$ demands (rounded) $10$ steps for RK4. The second-order Heun method replaces each of these $10$ steps with again $10$ steps to reach the same error level, which replaces $40$ evaluations of the function with $200$. The first-order Euler method would need $10\,000$ steps and thus function evaluations, $1000$ per RK4 step.

The following table shows that this drastic difference also plays out in actual computations; in addition, the methods other than Euler are tuned to reach the error level $10^{-7}$ as well. The last column computes the coefficient in the formula error = C*h^p with p the order of the method. That it is roughly constant shows that p is indeed the order.

Euler:  h=2.50e-02, N=   40;  y[N]=9.15192033, y[N]-y(1)=-1.8008e-01, N^1*(y[N]-y(1))=-7.2032
Euler:  h=5.00e-03, N=  200;  y[N]=9.29508303, y[N]-y(1)=-3.6918e-02, N^1*(y[N]-y(1))=-7.3836
Euler:  h=1.00e-04, N=10000;  y[N]=9.33125831, y[N]-y(1)=-7.4293e-04, N^1*(y[N]-y(1))=-7.4293
----
 Heun:  h=1.00e-02, N=  100;  y[N]=9.33143081, y[N]-y(1)=-5.7042e-04, N^2*(y[N]-y(1))=-5.7042
 Heun:  h=2.00e-03, N=  500;  y[N]=9.33197820, y[N]-y(1)=-2.3033e-05, N^2*(y[N]-y(1))=-5.7582
 Heun:  h=2.00e-04, N= 5000;  y[N]=9.33200100, y[N]-y(1)=-2.3081e-07, N^2*(y[N]-y(1))=-5.7703
----
  RK2:  h=1.00e-02, N=  100;  y[N]=9.33168345, y[N]-y(1)=-3.1778e-04, N^2*(y[N]-y(1))=-3.1778
  RK2:  h=2.00e-03, N=  500;  y[N]=9.33198843, y[N]-y(1)=-1.2806e-05, N^2*(y[N]-y(1))=-3.2016
  RK2:  h=2.00e-04, N= 5000;  y[N]=9.33200111, y[N]-y(1)=-1.2828e-07, N^2*(y[N]-y(1))=-3.2070
----
  RK3:  h=5.00e-02, N=   20;  y[N]=9.33173585, y[N]-y(1)=-2.6538e-04, N^3*(y[N]-y(1))=-2.1231
  RK3:  h=4.00e-03, N=  250;  y[N]=9.33200109, y[N]-y(1)=-1.4362e-07, N^3*(y[N]-y(1))=-2.2441
  RK3:  h=2.00e-04, N= 5000;  y[N]=9.33200123, y[N]-y(1)=-1.8000e-11, N^3*(y[N]-y(1))=-2.2500
----
  RK4:  h=1.25e-01, N=    8;  y[N]=9.33182269, y[N]-y(1)=-1.7854e-04, N^4*(y[N]-y(1))=-0.7313
  RK4:  h=2.00e-02, N=   50;  y[N]=9.33200110, y[N]-y(1)=-1.3406e-07, N^4*(y[N]-y(1))=-0.8379
  RK4:  h=1.00e-03, N= 1000;  y[N]=9.33200123, y[N]-y(1)=-8.3134e-13, N^4*(y[N]-y(1))=-0.8313

These values are for the scalar differential equation $y'=y\,(2-\sin t)$, $y(0)=2$, with exact solution $y(t)=2e^{2t+\cos(t)-1}$, given in code by

from math import sin, cos, exp

# right-hand side of the ODE y' = f(t,y)
def yprime(t, y):
    return y*(2 - sin(t))

# known exact solution, used to measure the error y[N]-y(1)
def y_exact(t):
    return 2*exp(2*t + cos(t) - 1)

# exact values used to initialize the integration
def y_init(t):
    return y_exact(t)


t0 = 0
y0 = y_exact(t0)      # initial value y(0)
y1 = y_exact(t0 + 1)  # reference value y(1) for the error columns

The steps of the methods used are

# explicit Euler method, order 1
def Euler_step(f, t, y, h):
    return y + h*f(t, y)

# Heun's method (explicit trapezoidal), order 2
def Heun_step(f, t, y, h):
    k1 = f(t    , y       )
    k2 = f(t + h, y + h*k1)
    return y + 0.5*h*(k1 + k2)

# explicit midpoint method, order 2
def RK2_step(f, t, y, h):
    k1 = f(t        , y           )
    k2 = f(t + 0.5*h, y + 0.5*h*k1)
    return y + h*k2

# Kutta's third-order method
def RK3_step(f, t, y, h):
    k1 = f(t        , y           )
    k2 = f(t + 0.5*h, y + 0.5*h*k1)
    k3 = f(t +     h, y + h*(2*k2 - k1))
    return y + h*(k1 + 4*k2 + k3)/6

# classical Runge-Kutta method, order 4
def RK4_step(f, t, y, h):
    k1 = f(t        , y           )
    k2 = f(t + 0.5*h, y + 0.5*h*k1)
    k3 = f(t + 0.5*h, y + 0.5*h*k2)
    k4 = f(t +     h, y +     h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

The values-and-errors tables are then built with the code

methods = {"Euler": Euler_step, "Heun": Heun_step, "RK2": RK2_step, "RK3": RK3_step, "RK4": RK4_step}

def test_method(name, order, subdivisions):
    stepper = methods[name]
    for N in subdivisions:
        h = 1.0/N
        y = y0
        for k in range(N):
            y = stepper(yprime, t0 + k*h, y, h)
        # N**order*(y-y1) = (y-y1)/h**order estimates the constant C in error = C*h^order
        print("%5s:  h=%.2e, N=%5u;  y[N]=%.8f, y[N]-y(1)=%.4e, N^%d*(y[N]-y(1))=%.4f"
              % (name, h, N, y, y - y1, order, N**order*(y - y1)))
    print("----")

test_method("Euler", 1, [  40, 200, 10000])
test_method( "Heun", 2, [ 100, 500,  5000])
test_method(  "RK2", 2, [ 100, 500,  5000])
test_method(  "RK3", 3, [  20, 250,  5000])
test_method(  "RK4", 4, [   8,  50,  1000])
– Lutz Lehmann
  • Great answer, but I'm afraid we really haven't gone into much depth - we haven't even touched Runge-Kutta 4. I'm assuming that's why RK4 is used more than RK2 - because it has a smaller error margin, right? – Doug Apr 17 '15 at 12:56
  • @Doug There will always be a trade-off between computational complexity and accuracy. RK4 is usually accepted as a good tradeoff in general situations, but for specialized situations, there will almost always be better algorithms. – Arthur Apr 17 '15 at 13:04
  • @Arthur, thanks. Is it generally the case that the higher the computational complexity, the higher the accuracy of the method? So you're saying that RK4 is more accurate, but at the expense of more computational power? – Doug Apr 17 '15 at 13:17
  • That is why I listed the number of function evaluations: they dominate the computational complexity in terms of run-time. You can get higher accuracy with fewer evaluations by switching from improved Euler to RK4. The only thing that grows is memory: you have to hold the four $K_j$ vectors (results of the system function, the size of the state vector) in memory simultaneously. – Lutz Lehmann Apr 17 '15 at 14:05
  • @Doug There are actually several things going on here. There is the error in taking a step of size $h$ in exact arithmetic; there is the computational cost (memory and processing) of taking a step; there is the number of steps required to get the desired accuracy; and there is the error in taking a step of size $h$ in floating point arithmetic relative to the error in taking a step of size $h$ in exact arithmetic. This last error becomes quite significant for Runge-Kutta type methods of high order, which is a major reason they are not popular. – Ian Apr 17 '15 at 15:16
  • @Ian: Please explain. The error behaves as $T·(Ch^p+D\mu/h)$, which for larger $p$ gives a minimum closer to $1$ at $h\sim\sqrt[p+1]\mu$. However, the minimal value of that error estimate gets closer to $0$ at magnitude $T\mu^{1-1/(p+1)}$. -- For higher order methods it is no longer true that an order $p$ method needs only $p$ function evaluations, and the gain of additional accuracy from, say, $\mu^{1-1/6}$ to $\mu^{1-1/7}$ is minimal, which often does not justify the higher effort. – Lutz Lehmann Apr 17 '15 at 15:35
  • @LutzL I said "a major reason", not "the only reason". The way you've phrased it, you have essentially built my point in by setting up this optimization. In other words, once you optimize over $h$, you find that the minimal truncation error in finite precision arithmetic does not scale nearly as sharply as the order of the method would suggest. This effect manifests from my point about the floating point error effect. The use of methods of intermediate order is justified by this fact combined with your point about number of function evaluations per step per order. – Ian Apr 17 '15 at 15:55
  • @LutzL As an aside, I am fairly sure that the constants you wrote will depend in a significant fashion on the method, though I haven't looked back over the details of this material very recently. – Ian Apr 17 '15 at 15:58