
How can I replace the $\log(x)$ function by simple math operators like $+,-,\div$, and $\times$?

I am writing a computer code and I must use $\log(x)$ in it. However, the technology I am using does not provide a facility to calculate the logarithm. Therefore, I need to implement my own function using only simple operators ($+,-,\div$, and $\times$).

Thank you.

Nabeel

7 Answers


Contrary to popular belief, you can do better than power series. The trick is to use continued fractions and the related Padé approximants.

One continued fraction for the logarithm (due to Khovanskiĭ) goes a bit like this:

$$\log(1+z)=\cfrac{2z}{z+2-\cfrac{z^2}{3(z+2)-\cfrac{4z^2}{5(z+2)-\cfrac{9z^2}{7(z+2)-\cdots}}}}$$

The beauty of this is that it has a wider domain of applicability: it is valid as long as $|\arg(1+z)| < \pi$.

One can use the Lentz-Thompson-Barnett method on this CF, of course, but one could also choose to exploit argument reduction here, by suitably exploiting the identity $\log(ab)=\log\,a+\log\,b$. If you take that route, you can be justified in just using a truncation of the continued fraction. That truncation is what's called a Padé approximant.
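For illustration, here is a minimal sketch in Python of that route (the names `log1p_cf` and `log_cf` are made up): the truncated continued fraction is evaluated from the bottom up, and `math.frexp` merely splits off the binary exponent for the argument reduction, so the arithmetic itself stays within $+,-,\times,\div$.

```python
import math

def log1p_cf(z, terms=12):
    """Approximate log(1+z) by truncating Khovanskii's continued fraction.

    Evaluated bottom-up: the k-th partial denominator is
    D_k = (2k-1)(z+2) - k^2 z^2 / D_{k+1}, and log(1+z) ~ 2z / D_1.
    """
    d = (2 * terms - 1) * (z + 2)              # innermost denominator
    for k in range(terms - 1, 0, -1):
        d = (2 * k - 1) * (z + 2) - k * k * z * z / d
    return 2 * z / d

def log_cf(x, terms=12):
    """log(x) via argument reduction: x = m * 2**e with m in [0.5, 1),
    so log(x) = log1p(m - 1) + e * log(2)."""
    m, e = math.frexp(x)
    ln2 = log1p_cf(1.0, terms)                 # log 2 from the same fraction
    return log1p_cf(m - 1.0, terms) + e * ln2
```

With only a dozen levels this already agrees with `math.log` to many digits, because after reduction $|z|\le\tfrac12$, comfortably inside the fraction's domain of validity.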

I'll edit later with more details if needed.


In the interest of demonstrating that there is more than one way to skin a cat, I display here Borchardt's algorithm for computing $\log\,x$, which is a modification of the more conventional arithmetic-geometric mean iteration:

$$a_0=\dfrac{1+x}{2};\qquad b_0=\sqrt{x}$$

$$\text{repeat}\quad a_{k+1}=\dfrac{a_k+b_k}{2},\qquad b_{k+1}=\sqrt{a_{k+1}b_k},\qquad k=k+1\quad\text{until }|a_k-b_k|<\varepsilon$$

$$\log\,x\approx 2\,\dfrac{x-1}{a_k+b_k}$$

This of course presumes that you have a square root function available, but that remains doable even with your restrictions: for instance, the Newton iteration $s_{k+1}=\frac12\left(s_k+\frac{x}{s_k}\right)$ computes $\sqrt{x}$ using only the four basic operations.

If you find that the convergence rate is too slow for your taste, Carlson shows that you can use Richardson extrapolation to speed up the convergence in the article I linked to. From my own experiments, I've found that the convergence rate of the unmodified Borchardt algorithm is already pretty decent, so unless you do need the speed-up (and remembering that Richardson extrapolation requires an auxiliary array for its implementation, which adds to the storage cost), vanilla Borchardt is fine as it is.
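A direct transcription of the iteration above, as a sketch in Python (`borchardt_log` is a made-up name, and `math.sqrt` stands in for whatever square-root routine you have):

```python
import math

def borchardt_log(x, eps=1e-12):
    """Borchardt's algorithm: a_k and b_k share the limit (x-1)/log(x)."""
    a = (1.0 + x) / 2.0
    b = math.sqrt(x)
    while abs(a - b) > eps:
        a = (a + b) / 2.0        # arithmetic mean
        b = math.sqrt(a * b)     # geometric mean with the *new* a
    return 2.0 * (x - 1.0) / (a + b)
```

Note that the geometric mean uses the freshly updated $a_{k+1}$, not $a_k$; that departure from the classical AGM is exactly what makes the limit come out as $(x-1)/\log x$.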

  • If you are stuck on a remote island, with a shipload of pencils, paper, and.... uh.... erasers, you can create your own log table. All you need is log(2), log(3), and log(10) to great precision. To find the log of a number, find a nearby number that can be factored into powers of 2, 3, and 10. If you are close enough, you can get by with the above $a_0$. Which is more work: factoring, or square root? And on that island, don't let the native girls distract you from building your own log table! Be steadfast, like Napier, not fun-loving like Lagrange! – richard1941 Jan 25 '17 at 05:01
  • The formula for $b_{k+1}$ should be $\sqrt{a_kb_k}$, right? – Théophile Apr 22 '21 at 02:49
  • @Théophile, no, what you describe is the classical AGM. That very change from the classical form is why the Schwab-Borchardt algorithm works; see the very first paper I linked to in this answer. – J. M. ain't a mathematician Jun 10 '21 at 01:32

Curiously, nobody has proposed the CORDIC algorithm, which was very useful when the 'price' of multiplication and division was high and/or the CPU was limited.
The trick is to use a precomputed table of logarithms (say $\ln(10), \ln(2), \ln(1.1), \ln(1.01), \ldots, \ln(1.000001)$) and compute any logarithm using only addition/subtraction and shift operations (code example here).
A little late but...
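A sketch of the idea in Python (the name `cordic_ln` is made up, and binary factors $1+2^{-k}$ replace the decimal $1.1, 1.01, \ldots$ of the answer): in fixed-point hardware the product $w\cdot(1+2^{-k})$ is just $w + (w \gg k)$, so the loop needs only the small precomputed table plus add/compare/shift.

```python
import math

# Precomputed table: ln(1 + 2^-k). In the spirit of CORDIC these are the
# only logarithms ever stored; the loop itself is add/compare/shift.
_TABLE = [math.log(1.0 + 2.0 ** -k) for k in range(1, 41)]

def cordic_ln(x):
    """ln(x) for 1 <= x < 2 by greedy shift-and-add decomposition:
    x ~ product of selected (1 + 2^-k), so ln(x) ~ sum of tabled values."""
    w, r = 1.0, 0.0
    for k in range(1, 41):
        f = 1.0 + 2.0 ** -k           # w*f == w + (w >> k) in fixed point
        if w * f <= x:
            w *= f
            r += _TABLE[k - 1]
    return r + (x / w - 1.0)          # first-order cleanup of the residual
```

A general argument would first be scaled into $[1,2)$ using its binary exponent and a stored $\ln 2$, in the same spirit as the decimal reduction via $\ln 10$.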

Raymond Manzoni

If a number $N$ (base 10) has $n$ digits, then

$$n-1 \leq \log_{10}(N) < n$$

Then the logarithm can be approximated by

$$\log_{10}(N) \approx n-1 + \frac{N}{10^{n} - 10^{n-1}}$$

For example, the true value is

$\log_{10}(53) = 1.72427587\ldots$

Here $n=2$, $N=53$, so

$$\log_{10}(53) \approx 2 -1 + \frac{53}{100-10}=1.58888\ldots$$

The logarithm maps the numbers from 10 to 100 into the range 1 to 2, so the log of numbers near 50 is about 1.5. But this is only a linear approximation: good for mental calculation and toy projects, but not accurate enough for serious work.
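The estimate above can be sketched in Python (the helper name `log10_linear` is made up; the digit count $n$ comes from the decimal representation):

```python
def log10_linear(N):
    """Crude linear estimate of log10(N) for a positive integer N:
    n - 1 + N / (10**n - 10**(n-1)), where n is the digit count of N."""
    n = len(str(N))                        # number of decimal digits
    return n - 1 + N / (10 ** n - 10 ** (n - 1))
```
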


Use Newton's method to solve $e^x = y$; the solution $x$ is $\ln(y)$. For $e^x$, use the Maclaurin series (the Taylor series centered at $x_0=0$):

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots,$$

which converges everywhere on $\mathbb{R}$. Reduce the range first: take $y$ in $[1,10]$, so that $x$ lies in $[0,\ln(10)\approx 2.302585093]$; the constants $e\approx 2.718281828$ and $e^2 = e\cdot e$, together with elementary properties of the exponential, allow further argument reduction. Alternatively, build a minimax approximation for $\ln(x)$ on $[1,10]$ (a rational fraction, because its error shrinks faster) with Maple or similar software (e.g. Mathematica).
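Putting that together as a sketch in Python (the names `exp_taylor` and `ln_newton` are made up): from $f(x)=e^x-y$ and $f'(x)=e^x$, the Newton step is $x \leftarrow x - (e^x-y)/e^x = x - 1 + y/e^x$.

```python
def exp_taylor(x, terms=30):
    """e^x from its Maclaurin series, using only +, *, and /."""
    term, total = 1.0, 1.0
    for k in range(1, terms + 1):
        term *= x / k                  # builds x^k / k! incrementally
        total += term
    return total

def ln_newton(y, iters=25):
    """Solve e^x = y by Newton's method; the root is ln(y).
    Step: x <- x - 1 + y / e^x. Intended for y in [1, 10]."""
    x = 1.0                            # any starting point works here
    for _ in range(iters):
        x = x - 1.0 + y / exp_taylor(x)
    return x
```
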


The Wikipedia article Generalized continued fraction has a Khovanskiĭ-based algorithm that differs only in substituting $x/y$ for $z$, and showing an intermediate step:

$$ \log \left( 1+\frac{x}{y} \right) = \cfrac{x} {y+\cfrac{1x} {2+\cfrac{1x} {3y+\cfrac{2x} {2+\cfrac{2x} {5y+\cfrac{3x} {2+\ddots}}}}}} = \cfrac{2x} {2y+x-\cfrac{(1x)^2} {3(2y+x)-\cfrac{(2x)^2} {5(2y+x)-\cfrac{(3x)^2} {7(2y+x)-\ddots}}}} $$
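A sketch in Python of the condensed (rightmost) form above, evaluated bottom-up (the name `log_ratio` is made up, and $s$ abbreviates the repeated term $2y+x$):

```python
def log_ratio(x, y, terms=20):
    """log(1 + x/y) from the condensed continued fraction:
    2x / (s - (1x)^2 / (3s - (2x)^2 / (5s - ...))) with s = 2y + x."""
    s = 2.0 * y + x
    d = (2 * terms - 1) * s            # innermost partial denominator
    for k in range(terms - 1, 0, -1):
        d = (2 * k - 1) * s - (k * x) ** 2 / d
    return 2.0 * x / d
```

This computes the logarithm of any rational argument $1+x/y$ directly, without first forming the quotient $z = x/y$.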

Glenn

Here I use log to mean logarithm base 10.

Here is a quick, iterative method to compute $\log x$ for any $1 \le x \le 10.$

[INITIALIZE] Let $n = 0$. Define

$$\begin{array}{ccc} xl_0 = 1& xm_0=\sqrt{10} & xr_0=10 \\ yl_0 = 0& ym_0=0.5 & yr_0=1 \end{array}$$

[ITERATE] Compare $xm_n$ to $x$. If they satisfy your favorite criterion for "close enough", then $\log x = ym_n$ and we are done. Otherwise compute the following and then assign $n\to n+1$.

If $xm_n > x$,

$$\begin{array}{ccc} xl_{n+1} = xl_n& xm_{n+1}=\sqrt{xl_n \cdot xm_n} & xr_{n+1}=xm_n \\ yl_{n+1}=yl_n& ym_{n+1}=(yl_n+ym_n)/2 & yr_{n+1}=ym_n \end{array}$$

If $xm_n < x$,

$$\begin{array}{ccc} xl_{n+1} = xm_n& xm_{n+1}=\sqrt{xm_n \cdot xr_n} & xr_{n+1}=xr_n \\ yl_{n+1}=ym_n& ym_{n+1}=(ym_n+yr_n)/2 & yr_{n+1}=yr_n \end{array}$$

This is an extremely simple program to write and it returns reasonably accurate values of $\log x $ for $1 \le x < 10$. If you need $\ln x$, just use $\ln x = \dfrac{\log x}{\log e}$
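A sketch of the iteration in Python (`log10_bisect` is a made-up name): when $xm_n < x$ the target lies in the right half, so the search keeps $[xm_n, xr_n]$; the $x$'s move by geometric means while the $y$'s move by arithmetic means.

```python
import math

def log10_bisect(x, tol=1e-10):
    """log10(x) for 1 <= x <= 10 by bisection: xm is always 10**ym,
    so narrowing the bracket around x narrows ym around log10(x)."""
    xl, xm, xr = 1.0, math.sqrt(10.0), 10.0
    yl, ym, yr = 0.0, 0.5, 1.0
    while abs(xm - x) > tol:
        if xm < x:                       # target is in the right half
            xl, yl = xm, ym
            xm, ym = math.sqrt(xm * xr), (ym + yr) / 2.0
        else:                            # target is in the left half
            xr, yr = xm, ym
            xm, ym = math.sqrt(xl * xm), (yl + ym) / 2.0
    return ym
```
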

You might also find THIS interesting. Just scroll down to "An Algorithm For Logarithms".