It was while fiddling around yesterday that I came up with this rather pretty approximation:
$$\log x = \frac{1}{2\epsilon}(x^{\epsilon}-x^{-\epsilon})+\mathcal{O}(\epsilon^2)$$
To be more precise,
$$\log x = \frac{1}{2\epsilon}(x^{\epsilon}-x^{-\epsilon})-\frac{1}{6}\epsilon^2(\log x)^3-\frac{1}{5!}\epsilon^4(\log x)^5+\mathcal{O}(\epsilon^6)$$
The expansion results most elegantly from looking at $f_x(\epsilon)=x^{\epsilon}$, which has derivatives $f_x^{(n)}(\epsilon)=(\log x)^nx^{\epsilon}$: Taylor expanding about $\epsilon=0$ and subtracting the series for $x^{-\epsilon}$, the even-order terms cancel and only odd powers of $\log x$ survive.
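Equivalently, the bracket is a hyperbolic sine in disguise, which packages the whole series at once:
$$\frac{1}{2\epsilon}\left(x^{\epsilon}-x^{-\epsilon}\right)=\frac{\sinh(\epsilon\log x)}{\epsilon}=\log x+\frac{1}{3!}\epsilon^2(\log x)^3+\frac{1}{5!}\epsilon^4(\log x)^5+\cdots$$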
Computationally, use the following algorithm:
0) Common to most log algorithms is range reduction: divide $x$ by $2$ until it lies between $1$ and $2$ (multiplying by $2$ instead if $x<1$), adding $\log 2$ to the answer for each division (and subtracting it for each multiplication). Use this as our starting point, i.e. $x \in (1,2)$.
1) Take the square root of $x$, then the square root of the result, and so on, to get $y=x^{2^{-n}}$ in $n$ root extractions.
2) Calculating $a_1=\displaystyle 2^{n-1}(y-1/y)$ then requires 3 more operations (one division, one subtraction, one scaling). This gives $\log x=a_1+\mathcal{O}(2^{-2n})$, where for large $n$ the error is at most $\frac{(\log 2)^3}{6}2^{-2n} \approx \frac{2^{-2n}}{18}$.
We could stop here: in $n+3$ operations we have generated roughly $2n+4$ bits of accuracy, i.e. roughly $2$ bits per operation (counting each square root as one operation). Steps 0)–2) are sketched in code below.
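Here is a minimal sketch of steps 0)–2) in Python; the function name is mine, and I lean on `math.sqrt` and a precomputed value of $\log 2$, which step 0) assumes anyway:

```python
import math

LOG2 = 0.6931471805599453  # precomputed log(2), as assumed in step 0)

def log_first_order(x, n=12):
    """Estimate log(x) via steps 0)-2)."""
    if x <= 0.0:
        raise ValueError("log requires x > 0")
    # Step 0): range-reduce to m in (1, 2) with log(x) = log(m) + k*log(2).
    k = 0
    while x >= 2.0:
        x /= 2.0
        k += 1
    while x < 1.0:
        x *= 2.0
        k -= 1
    # Step 1): n repeated square roots give y = x**(2**-n).
    y = x
    for _ in range(n):
        y = math.sqrt(y)
    # Step 2): a1 = 2**(n-1) * (y - 1/y) = log(x) + O(4**-n).
    a1 = (y - 1.0 / y) * 2.0 ** (n - 1)
    return a1 + k * LOG2
```

For instance, `log_first_order(1.5)` should agree with `math.log(1.5)` to about nine digits, consistent with the $\frac{(\log x)^3}{6}4^{-n}$ truncation term at $n=12$.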
3) But we can do better: use $a_1$ as the approximation for $\log x$ inside the correction term. For sufficiently large $n$ this is valid, and we get $\log x =a_1-\frac{2^{-2n}}{6} (a_1)^3+\mathcal{O}(2^{-4n})$. (Some care is needed with the constant: substituting $a_1$ for $\log x$ in the cubic term shifts the $\epsilon^4$ coefficient from $\frac{1}{5!}$ to $\frac{3}{40}$, so for large $n$ the error is at most $\frac{3(\log2)^5}{40} 2^{-4n} \approx \frac{2^{-4n}}{83}$.)
So in roughly $n+9$ operations we generate $4n+6$ bits of accuracy. For $53$ bits of accuracy (roughly 16 decimal places) we can take $n=12$ and use about $21$ operations. I don't know exactly how good this is, but it seems decent to me. For reference, see this question for some info on how it's done in practice.
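And a sketch with the step-3) correction bolted on (same assumptions and helper names as above). One honest caveat: in plain double precision the rounding of the $n$ square roots is amplified by the factor $2^{n-1}$, costing roughly $n$ bits, so the achievable accuracy is capped at a few times $10^{-13}$ for $n=12$ regardless of the truncation term; to actually see the $2^{-4n}$ behaviour you would want extended-precision arithmetic.

```python
import math

LOG2 = 0.6931471805599453  # precomputed log(2)

def log_refined(x, n=12):
    """Estimate log(x) via steps 0)-3), including the cubic correction
    log(x) ~= a1 - (4**-n / 6) * a1**3."""
    if x <= 0.0:
        raise ValueError("log requires x > 0")
    # Steps 0)-2), exactly as in log_first_order above.
    k = 0
    while x >= 2.0:
        x /= 2.0
        k += 1
    while x < 1.0:
        x *= 2.0
        k -= 1
    y = x
    for _ in range(n):
        y = math.sqrt(y)
    a1 = (y - 1.0 / y) * 2.0 ** (n - 1)
    # Step 3): reuse a1 as the stand-in for log(x) in the correction term.
    a2 = a1 - a1 ** 3 / (6.0 * 4.0 ** n)
    return a2 + k * LOG2
```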
The question is, for those more knowledgeable than me, is this actually any good?