2

The Dottie number is the root of the equation $\cos \alpha = \alpha$; numerically, $\alpha \approx 0.73908513321516064165531208767\dots$.

I wonder how I can compute it. I have tried to do it with an approximating formula:

$\alpha = \frac{5\pi^2}{\alpha^2 + \pi^2} - 4$

I have solved this equation and got $\alpha \approx 0.738305\dots$. So, how can I compute it accurately? Can I use a Taylor series, etc.?

Jam
  • 10,325
  • 1
    How about some iterations with Newton's method, i.e. find the zero of $f(\alpha) = \cos(\alpha) - \alpha$? That should give good numerical results and could even yield a closed-form sequence that converges towards the dottie number. – GDumphart Mar 27 '15 at 12:05
  • What do you mean by "accurately"? Perhaps, you mean precisely? That is, correct to some number of digits. If so, how many digits? Are you interested provably correct digits? Or quick convergence? In any event, I suppose one natural approach would be to iterate the cosine function. – Mark McClure Mar 27 '15 at 12:49
  • 1
    This question seems a duplicate of:

    https://math.stackexchange.com/questions/46934/what-is-the-solution-of-cosx-x

    – giorgiomugnaini Jul 27 '21 at 18:57

7 Answers

6

Using Newton's method, iterate

$$\alpha_{n+1} = \alpha_n + \frac{\cos \alpha_n - \alpha_n}{\sin\alpha_n + 1}$$

as a fixed-point iteration from a chosen starting value ($\alpha_0 := \frac1{\sqrt2} = 0.7\color{red}{071}\ldots$ seems like a good choice).

Thus

$$\alpha_1 = \frac1{\sqrt 2} + \frac{\cos\frac1{\sqrt2}-\frac1{\sqrt2}}{\sin\frac1{\sqrt2} + 1} = 0.739\color{red}3\ldots \\ \alpha_2 = \frac1{\sqrt 2} + \frac{\cos\frac1{\sqrt2}-\frac1{\sqrt2}}{\sin\frac1{\sqrt2} + 1} + \frac{\cos\left(\frac1{\sqrt 2} + \frac{\cos\frac1{\sqrt2}-\frac1{\sqrt2}}{\sin\frac1{\sqrt2} + 1}\right) - \frac1{\sqrt 2} - \frac{\cos\frac1{\sqrt2}-\frac1{\sqrt2}}{\sin\frac1{\sqrt2} + 1}}{\sin\left(\frac1{\sqrt 2} + \frac{\cos\frac1{\sqrt2}-\frac1{\sqrt2}}{\sin\frac1{\sqrt2} + 1}\right) + 1} = 0.7390851\color{red}4\ldots$$

As you can see, it converges quickly. Only one more iteration gives

$$\alpha_3 = 0.73908513321516\color{red}1\ldots$$

which agrees with $\alpha$ to within the IEEE double-precision standard. For $\alpha_0 = 0.7$ you need one more iteration for the same result; starting from $0.739$ only requires two iterations, and from $0.73908513$, a single iteration is enough for double precision.

AlexR
  • 24,905
  • This is what I'm looking for. But I can't understand Newton's formula clearly. $f(x) = \cos x - x$, okay, but why $f'(x) = \sin x + 1$? Please explain, I'm a beginner. – Oğuz İsmayil uysal Mar 27 '15 at 15:00
  • @Oğuzİsmayiluysal Actually $f'(x) = -(\sin x + 1)$, but Newton's formula is $$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$ so the negative signs cancel. Note that $\sin' = \cos$, $\cos' = -\sin$ and $x' = 1$. – AlexR Mar 27 '15 at 15:02
  • Okay, but still I couldn't get the logic of inverse functions. – Oğuz İsmayil uysal Mar 27 '15 at 15:58
  • 1
    @Oğuzİsmayiluysal It's not the inverse, it's the derivative! – AlexR Mar 27 '15 at 16:25
  • 1
    These words have the same meaning in my language. Excuse my bad grammar. I need information about derivatives; could you at least give me a link? – Oğuz İsmayil uysal Mar 27 '15 at 16:56
  • @Oğuzİsmayiluysal Try Wikipedia; it also has links to many other languages in the sidebar, so you can probably find the one for your mother tongue from there. – AlexR Mar 27 '15 at 16:59
  • Good catch, thanks :) The other numbers seem to be correct, I checked them again. – AlexR Aug 20 '22 at 08:17
3

The Taylor series of order 2 gives a simple quadratic in $\alpha$: $$\alpha=1-\alpha^2/2\implies \alpha=0.\color{red}{73}2..$$ Order 4 gives a quartic (there is a closed-form formula for the roots of any polynomial of degree less than 5): $$\alpha=1-\alpha^2/2+\alpha^4/24\implies \alpha=0.\color{red}{739}2..$$ Fairly accurate for practical purposes, the correct value being $0.739085...$

RE60K
  • 17,716
  • 3
    Because the value is close to $\sqrt2/2$, I'd propose to use the Taylor series around $α=\pi/4$. Of course, this uses additional constants $\sqrt2$ and $\pi$. – Lutz Lehmann Mar 27 '15 at 12:57
2

An analytical form of $x$ can be obtained by solving Kepler's equation:

$$M= E-\epsilon \sin(E)$$

with eccentricity $\epsilon=1$ and mean anomaly $M=\pi/2$, by means of a Kapteyn series:

$$2x = \frac{\pi}{2}+\sum_{n=1}^\infty \frac{2J_n(n)}{n} \sin(\pi n/2)$$

where the $J_n$ are Bessel functions of the first kind. Since $\sin(\pi n/2)$ vanishes for even $n$ and alternates in sign over odd $n$, this simplifies to:

$$2x = \frac{\pi}{2}+\sum_{n=0}^\infty \left( \frac{2J_{4n+1}(4n+1)}{4n+1} - \frac{2J_{4n+3}(4n+3)}{4n+3}\right)$$

$$x = \frac{\pi}{4}+\sum_{n=0}^\infty \left( \frac{J_{4n+1}(4n+1)}{4n+1} - \frac{J_{4n+3}(4n+3)}{4n+3}\right)$$

Such a series can be evaluated numerically, but without an acceleration technique (see below) it converges slowly: $n=10000$ terms are required to obtain

$$x = 1.154940317134$$

with

$$2x-\sin(2x)-\pi/2=-1.38017659479\times 10^{-6}$$

Thus $$2x-\pi/2=\sin(2x)$$ Let $y=2x-\pi/2=\sin(2x)$. Then $$\sin(2x) =\sin(\pi/2+y)=\sin(\pi/2-y)=\cos(y)$$ so $$\cos(y)=y$$ Hence $y$ is the Dottie number.

In order to improve the convergence, we can employ a series acceleration technique such as Levin's transformation (see http://en.wikipedia.org/wiki/Series_acceleration).

With only 10 (ten!) terms we obtain:

$$x=1.1549406884223$$

Simple C++ code based on the GSL library is the following:

    #include <iostream>
    #include <iomanip>
    #include <cmath>
    #include <gsl/gsl_sf.h>
    #include <gsl/gsl_sum.h>
    using namespace std;

    int main(int argc, char* argv[])
    {
        double PIH = atan(1.) * 2; // pi/2
        cout << setprecision(13);
        double E = PIH;

        // raw series
        cout << "raw series" << endl;
        for (int i = 0; i < 1e4; i += 2)
        {
            double term  = 2 * gsl_sf_bessel_Jn(2 * i + 1, 2 * i + 1) / (2 * i + 1);
            double term2 = 2 * gsl_sf_bessel_Jn(2 * i + 3, 2 * i + 3) / (2 * i + 3);
            E += (term - term2);
        }
        cout << E / 2 << endl;
        cout << "error: " << E - sin(E) - PIH << endl;

        // Levin-accelerated series
        cout << "levin accelerated series" << endl;
        const int N = 10;
        double t[N];
        double sum_accel = 0, err;

        gsl_sum_levin_u_workspace* w = gsl_sum_levin_u_alloc(N);

        t[0] = PIH;
        for (int i = 1; i < N; i++)
        {
            double term  = 2 * gsl_sf_bessel_Jn(4 * i - 3, 4 * i - 3) / (4 * i - 3);
            double term2 = 2 * gsl_sf_bessel_Jn(4 * i - 1, 4 * i - 1) / (4 * i - 1);
            t[i] = term - term2;
        }

        gsl_sum_levin_u_accel(t, N, w, &sum_accel, &err);
        gsl_sum_levin_u_free(w);

        E = sum_accel / 2;
        cout << E << endl;
        cout << "error: " << sum_accel - sin(sum_accel) - PIH << endl;
    }

PM 2Ring
  • 4,844
1

Let $a_{0}$ be the starting value (it's better to choose one close to the Dottie number). For example, I'll choose $a_{0} = 0.73$.

$$a_{n} = \cos(a_{n-1})$$ $$ \lim_{n \to \infty} a_{n} = \text{Dottie's Number}$$

Let's begin the calculations: $$a_{1} = \cos(a_{0}) = \cos(0.73) = 0.745174402...$$ $$a_{2} = \cos(a_{1}) = \cos(0.745174402...) = 0.734969653...$$ $$a_{3} = \cos(a_{2}) = \cos(0.734969653...) = 0.741851103...$$ $$a_{4} = \cos(a_{3}) = \cos(0.741851103...) = 0.737219118...$$ $$\dots$$ $$a_{55} = 0.739085133...$$

Of course it is not easy to do by hand, but for a computer it's a piece of cake.

WHoZ
  • 81
1

With $a_0=1$ and

$$a_{n+1}=\cos(a_n)$$

it follows that

$$\alpha=\lim_{n\to\infty} a_n$$

Unfortunately, this converges extremely slowly. One can consider, more generally,

$$a(n,k,x)=\begin{cases}a(n,k-1,(a(n,k-1,x)+a(n+1,k-1,x))/2),&k>0\\a(n-1,0,\cos(x)),&n>k=0\\x,&n=k=0\end{cases}$$

From which it is easily seen that

$$\alpha=\lim_{n\to\infty}a(n,k,x)$$

For any $k\in\Bbb N$ and $x\in\Bbb R$.

A quick implementation is given: https://repl.it/LHoa/12

A few values of $a(n,k,1)$ are provided below:

    k\n    0                   1                   2                   3
    0      1.0000000000000000  0.5403023058681398  0.8575532158463934  0.6542897904977791
    1      0.7701511529340699  0.7655325029045684  0.7467463120692533  0.7437002334049706
    2      0.7436266020461714  0.7403751852701087  0.7392909637993013  0.7391335465331998
    3      0.7391414918542529  0.7390881161559403  0.7390852375353876  0.7390851380676647
$\alpha \approx 0.7390851332151607$, so $a(3,3,1)$ is accurate to 8 places. Estimating from the first few values, it appears $a(n,n,1)$ agrees with $\alpha$ to roughly the first $n^2$ places.

1

As @Lutz Lehmann wrote in comments, we have $$0=x-\cos(x)=\frac{\pi -2 \sqrt{2}}{4}+\left(1+\frac{1}{\sqrt{2}}\right)\left(x-\frac{\pi }{4}\right)+\frac{1}{\sqrt{2}}\sum_{n=2}^\infty \frac{(-1)^{\left\lfloor \frac{n+3}{2}\right\rfloor }}{n!}\left(x-\frac{\pi }{4}\right)^n$$ Truncating to some order and using series reversion $$x=\frac \pi 4+\sum_{n=1}^p \frac{a_n}{n!}\, z^n \qquad \text{with} \qquad z=\frac{2 \sqrt{2}-\pi }{4}$$

The first $a_n$ are $$\left( \begin{array}{cc} n & a_n \\ 1 & 2-\sqrt{2} \\ 2 & 14-10 \sqrt{2} \\ 3 & 300-212 \sqrt{2} \\ 4 & 10216-7224 \sqrt{2} \\ 5 & 478480-338336 \sqrt{2} \\ 6 & 28521088-20167456 \sqrt{2} \\ 7 & 2064360896-1459723584 \sqrt{2} \\ 8 & 175785969024-124299450752 \sqrt{2} \\ 9 & 17213908329728-12172071310592 \sqrt{2} \\ 10 & 1905546569787904-1347424901364224 \sqrt{2} \\ 11 & 235279360692440064-166367631418857472 \sqrt{2} \\ 12 & 32055293262298333184-22666515238694615040 \sqrt{2} \\ 13 & 4776761007035879493632-3377680100182551924736 \sqrt{2} \\ 14 & 772827653451233424957440-546471674443854294032384 \sqrt{2} \\ 15 & 134906205026964718073135104-95393092398709458563973120 \sqrt{2} \\ \end{array} \right)$$

Using only the above terms leads to an absolute error of $9.25\times 10^{-24}$.

Edit

Using my favored $\color{red}{\large 1,400}$-year-old approximation $$\cos(x) \simeq\frac{\pi ^2-4x^2}{\pi ^2+x^2}\qquad \text{for}\qquad -\frac \pi 2 \leq x\leq\frac \pi 2$$ and solving the resulting cubic equation $$x^3+4 x^2+\pi ^2 x-\pi ^2=0$$ gives $$x=-\frac{2}{3} \left(2+\sqrt{3 \pi ^2-16} \,\,\sinh \left(\frac{1}{3} \sinh ^{-1}\left(\frac{128-63 \pi ^2}{2 \left(3 \pi ^2-16\right)^{3/2}}\right)\right)\right)=\color{red}{0.7383051}84$$

1

I will use the Lagrange inversion theorem (https://en.wikipedia.org/wiki/Lagrange_inversion_theorem). Did anybody use that? There are so many answers about this number.

Let $z=f(w)=\cos w-w$ and $w=a=\frac{\pi}{2}$. Since $f'(a)=-2\neq 0$, the inverse function at $z=f(a)=-\frac{\pi}{2}$ can be expressed as
$$g(z)=a+\sum_{n=1}^{\infty}g_n\frac{(z-f(a))^n}{n!}=\frac{\pi}{2}+\sum_{n=1}^{\infty}g_n\frac{(z+\frac{\pi}{2})^n}{n!}$$ where $$g_n=\lim_{w\rightarrow a}\frac{d^{n-1}}{dw^{n-1}}\left(\frac{w-a}{f(w)-f(a)}\right)^n=\lim_{w\rightarrow \frac{\pi}{2}}\frac{d^{n-1}}{dw^{n-1}}\left(\frac{w-\frac{\pi}{2}}{\cos w-w+\frac{\pi}{2}}\right)^n$$ I was able to compute the first few coefficients by hand, but then I had to use WolframAlpha. The even-indexed coefficients are zero. The odd ones up to $g_9$ are:

$$g_{1}=-\frac{1}{2}, g_3=-\frac{1}{16}, g_5=-\frac{1}{16}, g_7=-\frac{43}{256}, g_{9}=-\frac{223}{256}...$$

Hence, $$g(z)=\frac{\pi}{2}-\frac{1}{2}(z+\frac{\pi}{2})-\frac{1}{96}(z+\frac{\pi}{2})^3-\frac{1}{1920}(z+\frac{\pi}{2})^5-\frac{43}{256\times 7!}(z+\frac{\pi}{2})^7-\frac{223}{256}\frac{(z+\frac{\pi}{2})^9}{9!}-\dots$$ Finally, the Dottie number is $D=g(0)$. (Why?) WolframAlpha computes $D=g(0)\approx 0.739$. Unfortunately, since I am too lazy to compute $g_{11}$, the fourth digit is off.

Martin R
  • 113,040
Bob Dobbs
  • 10,988