
Here is a very closely related, but not duplicate, question.

One time, I decided to test how long it takes to compute $2^x$ compared to $e^x$ in Python. I expected $2^x$ to be faster, since in binary, the base computers use, you can just append a 0 to the number 1 $x$ times. I entered into Python IDLE (Python's Integrated Development and Learning Environment):

>>> import timeit
>>> timeit.timeit('2**100')
0.24405850004404783
>>> timeit.timeit('e**100', 'e=2.718281828459045')
0.10122330003650859

and found that $e^x$ was about twice as fast to compute, despite not being an integer. Why does this happen? (Note that this only happens for large exponents.) The only reason I can think of is that $e^x$ can be easily calculated using a Maclaurin series, since $\frac{d}{dx}e^x$ is equal to $e^x$.
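(To illustrate what I mean by appending zeros: a left shift of 1 builds the same number as an integer power of two, as this quick check, separate from the timings above, shows.)

>>> 1 << 100 == 2**100   # shifting 1 left by 100 bits appends 100 zero bits to 1
True
>>> bin(2**5)            # in binary: a one followed by five zeros
'0b100000'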

  • Using 3.5.2 on an ancient clunker, I get the reverse relation even using timeit.timeit('total += e**100', "e, total =2.718281828459045, 0"). What is your platform? – greybeard May 14 '23 at 08:51
  • @greybeard I am using IDLE (meaning python 3.10.11) and running it on a Windows 11 laptop. Model name for laptop is ideapad Flex 5. – The Empty String Photographer May 14 '23 at 09:00
  • timeit.timeit('2.0**100') gives 0.0073371000000008735 – Nathaniel May 14 '23 at 10:04
  • Compare the outputs of both calculations. You get one exact result and one approximation. The "just append 0 bits" approach fails because the number gets bigger than 64 bits (the common register size for integers) can handle. – Jasper May 14 '23 at 10:47
  • I do not believe it is actually using a Taylor series for computing large exponentials anyway. – Lelouch May 14 '23 at 13:27
  • I don't think we have a good answer on this at the moment, but you should never use Taylor series for approximation. They approximate a function at a point, where you almost certainly want an approximation on an interval. Taylor series are for analysis, and therefore can be a good starting point for making quality approximations. But also look up Chebyshev approximation and the Remez exchange algorithm. – Pseudonym May 15 '23 at 00:23

2 Answers


The reason is simple: 2**100 returns a bignum, with full accuracy, and there is more work in handling the bignum representation than mere binary shifts. By contrast, e**100 returns a float and is evaluated in hardware floating point (CPython hands it to the C library's pow function).

>>> from timeit import timeit
>>> timeit('2**100')
0.20831729998462833
>>> timeit('2.0**100')
0.008533199987141415
>>> timeit('2.718281828459045**100')
0.008684300002641976
>>> timeit('e**100', 'e=2.718281828459045')
0.10991410000133328
>>> timeit('e**100', 'from math import e')
0.10929809999652207
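
One way to see that the two expressions take different code paths is to check the result types (a quick sketch, separate from the timings above):

>>> type(2**100)                   # arbitrary-precision integer (bignum)
<class 'int'>
>>> type(2.718281828459045**100)   # 64-bit machine float
<class 'float'>
>>> 2**100                         # the bignum result is exact
1267650600228229401496703205376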
John L.
  • Can you explain how the built-in function is faster? – The Empty String Photographer May 14 '23 at 09:45
  • @PlaceReporter99: bignums are not built-in, they are emulated in software. – May 14 '23 at 09:58
  • If that's the explanation, it's a minor miracle that the difference is only a factor of 2. Mind, I'm not doubting you. e**100 is still well within floating-point range. In fact, now I'm wondering if the time for that is not overly long. – Victor Eijkhout May 14 '23 at 15:31
  • @VictorEijkhout: why? $200$ or $100$ nanoseconds seem reasonable. – May 14 '23 at 15:36
  • Am I misreading the Q? 1/10th of a second for computing e**100? That would be $10^{-3}$ seconds per multiplication in the most naive algorithm. – Victor Eijkhout May 14 '23 at 15:42
  • @VictorEijkhout: come on, realize that the unit is µs. –  May 14 '23 at 15:47
  • Then it's amazing that the bigint calculation is only twice as slow. – Victor Eijkhout May 14 '23 at 16:04
  • I always thought that timeit gave values in seconds. – The Empty String Photographer May 14 '23 at 16:06
  • @PlaceReporter99: read the manual... A tenth of a second would make absolutely no sense. –  May 14 '23 at 16:10
  • @PlaceReporter99 It does give a result in seconds, but that result is the total for a certain number of repetitions. The default number of repetitions is 1,000,000, so the result can also be interpreted as the time for one execution in microseconds (see the sketch after this thread). – kaya3 May 15 '23 at 03:29
  • @VictorEijkhout: right, after all an integer multiply is not much faster than a floating-point one. –  May 15 '23 at 06:02
  • These are not integer multiplies. You yourself state that they are bigint computations which are done in software. I was expecting that to be much slower. – Victor Eijkhout May 15 '23 at 13:59
  • @VictorEijkhout: these multiplies do use the ALU multiplier. For more, you should refer to Python's source code. –  May 15 '23 at 14:34
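
To make the units discussed in this thread concrete, here is a minimal sketch of what the calls above measure, with timeit's default repetition count spelled out (no timings shown, since those depend on the machine):

>>> import timeit
>>> total_seconds = timeit.timeit('2**100', number=1_000_000)  # total time for 1,000,000 runs
>>> total_seconds / 1_000_000 * 1e6   # hence the raw result also reads as µs per evaluation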

You are not comparing the same operations. You are comparing two operations that look very similar in source code, but are in fact very different.

2**100 takes the integer 2 and calculates that integer raised to the 100th power using arbitrary-precision integer arithmetic. If you tried 2**1000000 you would get a number with about 300,000 digits containing the exact value of 2**1000000.
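
As a quick check of that digit count (converting a million-bit integer to a decimal string is slow, but exact):

>>> # digits of 2**1000000: 1_000_000 * log10(2) ≈ 301029.9957, rounded up
>>> len(str(2**1000000))
301030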

e**100 with e = 2.718281828459045 takes a floating-point number and raises it to the 100th power using limited floating-point precision. This will not give you more than about 15–16 significant digits, and it will fail with an overflow once the result exceeds the largest 64-bit float (about $1.8 \times 10^{308}$, which for base $e$ means an exponent of around 710). It is a totally different operation, so it obviously takes a very different amount of time.
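A sketch of that overflow threshold, using math.exp (which, like the float ** here, works in double precision):

>>> import math
>>> math.exp(709)   # still fits in a 64-bit float
8.218407461554972e+307
>>> math.exp(710)   # just past the largest representable double
Traceback (most recent call last):
  ...
OverflowError: math range error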

gnasher729