6

I was researching the topic of Fibonacci numbers and the asymptotic complexity of generating them. Having come across a seemingly paradoxical conclusion, I've decided to check whether you agree with my explanation.

  1. The naive algorithm runs in $O(n)$ if we ignore the cost of addition.

  2. Binet's formula and the matrix exponentiation method should both theoretically run in $O(\lg(n))$, since exponentiation of both matrices and real numbers takes $O(\lg(n))$ steps.
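
To make the two approaches concrete, here is a minimal Python 3 sketch (the function names are mine, not from any library); `fib_matrix` uses binary exponentiation of $\begin{pmatrix}1&1\\1&0\end{pmatrix}$, so it performs $O(\lg n)$ matrix multiplications:

```python
def fib_naive(n):
    # O(n) big-integer additions.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def mat_mul(X, Y):
    # 2x2 integer matrix product, represented as nested tuples.
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def fib_matrix(n):
    # Binary exponentiation: M^n = [[F(n+1), F(n)], [F(n), F(n-1)]].
    result = ((1, 0), (0, 1))   # identity matrix
    base = ((1, 1), (1, 0))
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]

assert [fib_matrix(i) for i in range(12)] == [fib_naive(i) for i in range(12)]
```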

The problem arises when you analyze the size of the $n$th Fibonacci number by assuming that, after the first few members of the sequence, the ratio between consecutive numbers is at least $1.5$ (picked somewhat arbitrarily; I assume it can easily be proved by induction).

We can then bound the number from below by $c_1 \cdot (1.5)^n$. Its logarithm gives us the number of digits of the $n$th Fibonacci number, which is $c_2 \cdot n$. Am I right to assume that you can't print out (or calculate) a linear number of digits in sublinear time?

My explanation of this "paradox" is that I forgot to include the multiplication costs in the second algorithm (Binet/matrix), which would make its complexity $n \cdot \lg(n)$. I've found that the naive (first) algorithm runs better for very small inputs, and the second algorithm runs better for bigger ones (Python 3).

Is my explanation of complexity correct, and should the naive algorithm get better running time at even larger inputs ($n>10^9$ or such)?

I do not consider this to be a duplicate question, since it addresses the problem of arbitrary values and arbitrary-precision integer arithmetic.

Petar Mihalj
  • 148
  • 1
  • 8
  • 1
    Welcome to CS.SE! Take a look at https://en.wikipedia.org/wiki/Model_of_computation#Uses, http://cs.stackexchange.com/q/1643/755, http://cs.stackexchange.com/q/32736/755, http://cs.stackexchange.com/q/33918/755. – D.W. Aug 08 '16 at 16:37
  • Thanks for the links, very useful. The thing that still bothers me, though, is that in practice multiplication should take $O(n)$ times more time than addition. Or am I wrong?

    I think I got it: the $O(n)$ factor is true, but it is only a "constant", since we have fixed-size (32-bit) integers.

    – Petar Mihalj Aug 08 '16 at 16:55
  • https://en.wikipedia.org/wiki/Multiplication_algorithm – D.W. Aug 08 '16 at 17:35
  • 2
    The titular question seems to be a duplicate; community votes, please. – Raphael Aug 08 '16 at 18:57
  • 1
  • 1
    @PetarMihalj If you only consider fixed-size integers, computing any function is O(1) because you have only a finite number of possible inputs. By the way: you mention that you are using Python. Well, Python integers can grow unbounded in size, and the multiplication uses Karatsuba for big operands. – Bakuriu Aug 08 '16 at 21:05
  • 1
    The ratio between successive Fibonacci numbers tends to the golden ratio, which is about $1.618 > 1.5$. – David Richerby Aug 08 '16 at 22:52
  • @DavidRicherby I used $1.5$ to demonstrate a lower bound on the length of the $n$th Fibonacci number. I should have used $\alpha$ notation. – Petar Mihalj Aug 08 '16 at 23:35
  • 2
    @PetarMihalj I was confirming that 1.5 is a valid lower bound! – David Richerby Aug 08 '16 at 23:52

2 Answers

10

Let's assume that you could store such large Fibonacci numbers. In that case, the length of the $n$th Fibonacci number is $\lfloor n \log_{10}\phi - \log_{10}\sqrt{5}\rfloor + 1$ (for $n \ge 2$). That is, the length of the $n$th Fibonacci number is $\Theta(n)$.
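
For what it's worth, the digit-count formula can be checked numerically against the actual Fibonacci numbers (a sketch; floating-point rounding is only trustworthy here for moderate $n$):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Compare the predicted digit count with the true one for n = 2..499.
for n in range(2, 500):
    predicted = math.floor(n * math.log10(PHI) - math.log10(5) / 2) + 1
    assert predicted == len(str(fib(n))), n
```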

If you go by the naive method, then calculating the $n^{th}$ Fibonacci number would cost you $\Theta(n^2)$ time.

In comparison, matrix exponentiation will take $O(n^2\log n)$ time, assuming you use schoolbook long multiplication to multiply numbers. (A slightly more refined analysis shows that it is in fact $\Theta(n^2)$, by summing a geometric series.) But if instead we use Karatsuba multiplication, the running time to compute the $n$th Fibonacci number using matrix exponentiation becomes $O(n^{1.585}\log n)$. (Analogous to before, a slightly more refined analysis shows that the running time is in fact $\Theta(n^{1.585})$.) Thus, if you use sophisticated multiplication algorithms, matrix exponentiation would be better for large $n$ compared to the naive algorithm.

In short: which method is better depends on which multiplication algorithm is used.

D.W.
  • 159,275
  • 20
  • 227
  • 470
advocateofnone
  • 2,962
  • 1
  • 26
  • 43
  • 1
  • 1
    And, of course, multiplication can be done in $O(n^{1+\varepsilon})$ time for any fixed $\varepsilon > 0$, so you can get this down to $\Theta(n^{1 + \varepsilon})$ for any $\varepsilon > 0$ you fancy. – wchargin Aug 09 '16 at 06:47
  • @wchargin Do you mind linking some articles about it? – Petar Mihalj Aug 09 '16 at 09:51
  • @PetarMihalj Basically, Karatsuba's algorithm breaks each of its inputs into two chunks and does only three recursive multiplications instead of four, and thus becomes $n^{\log_2{3}}$. If you chop your inputs into three chunks each and do only five recursive calls instead of nine, you get $n^{\log_3{5}}$. With more and more chunks, the exponent approaches 1. For more, see: https://en.wikipedia.org/wiki/Toom%E2%80%93Cook_multiplication, or The Nature of Computation section 2.3. – wchargin Aug 11 '16 at 03:07
-1

This formula (from Wikipedia) may be much faster, depending on how you manage floating point and how you exponentiate. The first term dominates.

$$ F_n = \frac1{\sqrt{5}}\cdot\left(\frac{1+\sqrt{5}}2\right)^n - \frac1{\sqrt{5}}\cdot\left(\frac{1-\sqrt{5}}2\right)^n\,.$$

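
A variant of Pseudonym's suggestion in the comments below: instead of floating point, one can evaluate the closed form exactly in $\mathbb{Z}[\phi]$ (integers $a + b\phi$, using $\phi^2 = \phi + 1$). Then $\phi^n = F_{n-1} + F_n\,\phi$, so the coefficient of $\phi$ is exactly $F_n$. A hedged Python sketch (the helper names are mine):

```python
def zphi_mul(x, y):
    # Multiply a + b*phi by c + d*phi in Z[phi].
    # (a + b*phi)(c + d*phi) = ac + (ad + bc)*phi + bd*phi^2, and phi^2 = phi + 1.
    a, b = x
    c, d = y
    return (a*c + b*d, a*d + b*c + b*d)

def fib_exact(n):
    # Binary exponentiation of phi in Z[phi]; no precision issues, ever.
    result = (1, 0)   # 1 + 0*phi
    base = (0, 1)     # phi
    while n:
        if n & 1:
            result = zphi_mul(result, base)
        base = zphi_mul(base, base)
        n >>= 1
    return result[1]  # coefficient of phi is F(n)

assert fib_exact(10) == 55
```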

David Richerby
  • 81,689
  • 26
  • 141
  • 235
Ethan Bolker
  • 167
  • 1
  • 5
  • 2
    Hint: Don't use floating point, do your calculations in the field $\mathbb{Q}\left[\sqrt{5}\right]$. – Pseudonym Aug 09 '16 at 01:38
  • @Pseudonym: Would you mind explaining the notation? I don't know what $\mathbb{Q}[\sqrt{5}]$ means even though I know what $\mathbb{Q}$, $\sqrt{5}$, and field mean separately. – user541686 Aug 09 '16 at 04:51
  • @Mehrdad $\mathbb{Q}[X]$ is the ring of polynomials in terms of $X$ with coefficients in $\mathbb{Q}$. $\mathbb{Q}[\sqrt{5}]$ is the set of numbers that are a result of these polynomials evaluated at $X=\sqrt{5}$. You essentially end up with $\mathbb{Q}[\sqrt{5}] = \{a+b\sqrt{5}\ |\ a,b \in \mathbb{Q}\}$. Instead of converting to floating point and losing precision, it suffices to store the values $a$ and $b$. – mdxn Aug 09 '16 at 05:12
  • @Mehrdad Well, he is only suggesting that we do the computation on some representation of numbers in $\mathbb{Q}[\sqrt{5}]$ in a manner that is succinct and preserves precision. Storing the coefficients in a linear combination is just one approach to encoding these numbers. There may be other preferable encoding schemes with a different underlying data structure that still retain the same algebraic structure (field arithmetic in $\mathbb{Q}[\sqrt{5}]$). – mdxn Aug 09 '16 at 05:35
  • 2
    This answer is not a good idea. If you use regular floating point arithmetic for this, you will get incorrect answers when $n$ is large, because regular floating point arithmetic doesn't have enough precision. If you use fixed-point arithmetic with just the right precision, then (contrary to what you claim) the running time will be no faster than the matrix-based methods... plus you have extra headaches to deal with, to try to figure out how many digits of precision are needed for all intermediate results. Not a good approach. – D.W. Aug 09 '16 at 06:55
  • 2
    Well, you can't get the exact number in time better than $O(n)$ because the result has $O(n)$ digits. This formula gives you a very fast approximation. It's an excellent idea if you want an approximation. – gnasher729 Aug 09 '16 at 10:56
  • If you expand this answer out (just using the binomial theorem) you get an exact sum, linear in $n$. – Mark Hurd Aug 13 '16 at 12:29
  • If floating-point arithmetic and getting only an approximation for larger $n$ is good enough for you, then, since the exact result is always an integer and the second term is always less than 0.5 in absolute value, you can drop the second term and round the first to the nearest integer. This also avoids embarrassing results like claiming that fib(n) = 54.99999999937 for some n, which could easily happen with floating-point arithmetic. In that particular case the first term gives something close to 55, maybe 55.0148399, which is then rounded to the correct 55. – gnasher729 Jan 03 '23 at 12:28