I need confirmation of something that is probably silly.
Let $x$ be a floating-point number representable using $e$ bits for the exponent and $m$ bits for the mantissa, and let $f$ be an elementary function. If it helps, you may assume $D(f) = A = [1,2)$ and $f(A) = [1,2)$, since any elementary function can be transformed so that it is defined on the interval just specified. Assume I've implemented an algorithm $\psi$ that approximates $f$ in the same floating-point system.
I was wondering what, except for trivial cases, the BEST accuracy achievable by such an algorithm is, whatever the algorithm may be. The answer should trivially be $0.5$ ulp, right?
My answer is motivated by the following:
The set of floating-point numbers I've defined is finite, so I can trivially implement the computation of $f$ as $\psi(x) = \circ(f(x))$, where $\circ(\cdot)$ denotes the rounding operation. That is, I can sample the original function at every floating-point number, round each result, and store it in a table. Two situations can occur:
- $f(x)$ is a floating-point number; in that case the error is $0$. This is what I would call the trivial case.
- $f(x)$ is not a floating-point number; in that case rounding to nearest guarantees an error of at most $0.5$ ulp. This is the non-trivial case.
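The table-based construction above can be checked numerically. Below is a minimal sketch in Python of a hypothetical toy format with $m$ mantissa bits restricted to $[1,2)$, where the ulp is the constant $2^{-m}$; I use $f = \sqrt{\cdot}$ as the sample elementary function (its image on $[1,2)$ stays inside $[1,2)$), and I treat double-precision `math.sqrt` as the "exact" $f$, which is only an approximation of the ideal setting:

```python
import math

m = 8                      # mantissa bits of the toy format (assumption)
ulp = 2.0 ** -m            # constant ulp on [1, 2)

def round_to_format(y):
    """Round-to-nearest onto the grid {1 + k * 2**-m} (Python's round
    breaks ties to even, matching the usual IEEE default)."""
    k = round((y - 1.0) / ulp)
    return 1.0 + k * ulp

# Tabulate psi(x) = o(f(x)) over every representable x in [1, 2)
table = {1.0 + k * ulp: round_to_format(math.sqrt(1.0 + k * ulp))
         for k in range(2 ** m)}

# Worst-case error of the table, measured in ulps of the toy format
worst = max(abs(psi - math.sqrt(x)) / ulp for x, psi in table.items())
print(f"worst-case error: {worst:.6f} ulp")   # never exceeds 0.5
```

Running this confirms the argument in the question: with round-to-nearest, no tabulated value is farther than half an ulp from the reference value.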
So, because of this, the best accuracy I can achieve is $0.5$ ulp, right? It's a theoretical lower bound on the error in the non-trivial situation that I'm looking for.