
I'm working on a root-finding problem involving a function with a factorial. $\epsilon$ is a very small machine error, approximately $2 \times 10^{-16}$, used as a tolerance to prevent infinite iterations. And $x$ is a known value taken from $[0.1,1,10,100,1000]$, so there are five equations for which I need to find the root $n$: $$ r = \frac{x^{n+1}}{(n + 1)!} \approx \epsilon $$ My idea is that

\begin{align} r &= \frac{x^{n+1}}{(n+1)!} \\ \log r &= \log \frac{x^{n+1}}{(n+1)!} \\ \log r &= (n+1)\log x - \log (n+1)! \end{align} $\log r$ should be a number around $-16$, and we can find the root. But the problem requires me not to use Stirling's approximation; moreover, I cannot use a cumulative sum for $\log (n+1)!$ because $n$ can be a decimal number. Could someone provide some ideas on how to continue with this problem? Thank you very much. No need to worry about the root-finding algorithm; I just need to construct a tractable function that I can work with in Python.

2 Answers


If you're extending factorials to real values, you'll probably want to look into the Gamma Function.

$\Gamma(n+1) = n!$ when $n$ is a natural number, but $\Gamma$ can be defined for all complex numbers except the non-positive integers. In Python you can use the function lgamma from the math module in the standard library; lgamma computes the log of the absolute value of the Gamma function.

Something like this could work to compute the function you're interested in

import math

def log_r(n: float, x: float) -> float:
    # (n + 1)! = Gamma(n + 2), so log((n + 1)!) = lgamma(n + 2)
    return (n + 1) * math.log(x) - math.lgamma(n + 2)

though whether the Gamma function is actually what you want may depend on your use case.

Edit: I neglected to mention that if you plan to use Newton's method, you'll also need to be able to calculate the derivative of lgamma, which is the digamma function. In Python you can use scipy.special.digamma from the third-party SciPy package.
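For instance, a minimal sketch of that derivative, consistent with log_r above (assuming SciPy is installed; d_log_r is just an illustrative name):

import math
from scipy.special import digamma

def d_log_r(n: float, x: float) -> float:
    # d/dn of (n + 1)*log(x) - lgamma(n + 2)
    return math.log(x) - digamma(n + 2)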


You want to solve for $n$ the equation $$ r = \frac{x^{n+1}}{(n+1)!}\implies (n+1)!=\frac{x^{n+1}}{r} $$

If you look at this question of mine, you will see a magnificent approximation proposed by @robjohn.

Adapted to your problem, the solution is $$n \sim x \, e^{1+W(t)}-\frac 3 2 \qquad \text{with} \qquad t=-\frac{\log \left(2 \pi r^2 x\right)}{2 e x}$$ where $W$ is the Lambert W function.

Let us try $x=\pi$ and $r=10^{-20}$; this gives, as a real number, $n=31.4795$, while the exact solution obtained with Newton's method is $n=31.4800$.
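As a sketch, assuming SciPy is available, this estimate can be evaluated directly with scipy.special.lambertw:

import math
from scipy.special import lambertw

x, r = math.pi, 1e-20
t = -math.log(2 * math.pi * r**2 * x) / (2 * math.e * x)
n_est = x * math.exp(1 + lambertw(t).real) - 1.5   # principal branch of W
print(n_est)   # about 31.4795, as quoted above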

If you cannot afford the Lambert function, use the approximation $$W(t)\approx L_1-L_2+\frac{L_2}{L_1}+\frac{L_2(L_2-2)}{2L_1^2}+\frac{L_2(2L_2^2-9L_2+6)}{6L_1^3}+\cdots$$ where $L_1=\log(t)$ and $L_2=\log(L_1)$.

For the worked example, this would give $W(t)=1.35303$ while its exact value is $1.35116$.
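A minimal sketch of that truncated series (lambertw_series is just an illustrative name):

import math

def lambertw_series(t: float) -> float:
    # truncated expansion W(t) ~ L1 - L2 + L2/L1 + ...
    L1 = math.log(t)
    L2 = math.log(L1)
    return (L1 - L2 + L2 / L1
            + L2 * (L2 - 2) / (2 * L1**2)
            + L2 * (2 * L2**2 - 9 * L2 + 6) / (6 * L1**3))

For the $t$ of the worked example this returns roughly $1.353$, matching the figure above.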

Edit

Misreading the question, as @AlbertSteppi commented, I used the Lambert function (which you did not want).

The only other way is a root-finding procedure, and, for simplicity, let us apply Newton's method to find the zero of the function $$f(n)=\log (\Gamma (n+2))-(n+1)\log(x)+\log(r)$$ $$f'(n)=\psi (n+2)-\log (x)$$ So, the iterates will be $$n_{k+1}=n_k -\frac{\log (\Gamma (n_k+2))-(n_k+1)\log(x)+\log(r)}{\psi (n_k+2)-\log (x) }$$ The problem is to find a "reasonable" value of $n_0$.

For the range $0 \leq n \leq 100$ a quick and dirty regression $$\log (\Gamma (n+2)) \sim n ^a$$ gives $(R^2=0.9999782)$ $$\begin{array}{llll} \text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\ a & 1.28400 & 0.00011 & \{1.28378,1.28421\} \\ \end{array}$$

So, a starting point could be $$n_0=\big[-\log(r)\big]^{\frac 1a}$$

Trying with the previous problem, this would give the following iterates $$\left( \begin{array}{cc} k & n_k \\ 0 & 19.741 \\ 1 & 32.930 \\ 2 & 31.493 \\ 3 & 31.480 \end{array} \right)$$
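Putting the pieces together, a minimal sketch of this Newton iteration (assuming SciPy for digamma; solve_n is an illustrative name, and the default exponent a is the regression value above):

import math
from scipy.special import digamma

def solve_n(x: float, r: float, a: float = 1.284, tol: float = 1e-10) -> float:
    # Newton's method on f(n) = lgamma(n + 2) - (n + 1)*log(x) + log(r)
    n = (-math.log(r)) ** (1 / a)        # starting guess n_0
    while True:
        f = math.lgamma(n + 2) - (n + 1) * math.log(x) + math.log(r)
        step = f / (digamma(n + 2) - math.log(x))
        n -= step
        if abs(step) < tol:
            return n

print(solve_n(math.pi, 1e-20))   # about 31.480, as in the table above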