
I'm sure this is not a challenge for you, but it remains an open question for me:

Is it wise to prefer a recursive algorithm over its for-loop counterpart?

E.g., take the evaluation of the natural logarithm of a number $N + 1$:

$$ \ln (N+1) = \ln N + \Delta N, $$

where

$$ \Delta N = 2 \sum_{k=0}^{\infty} \frac{1}{(2k+1)(2N+1)^{2k+1}}. $$

While one could implement this as a recursive function with an appropriate termination condition (e.g. a relative error tolerance), one could just as well put it in a for loop and break when desired.
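To make this concrete, here is a rough sketch in C of the two variants I have in mind (the function names, the tolerance, and the truncation test are my own choices, nothing canonical; it needs $N \ge 1$):

```c
#include <math.h>
#include <stdio.h>

#define TOL 1e-15  /* relative error tolerance for truncating the series */

/* One term of the series is q^(2k+1) / (2k+1) with q = 1/(2N+1). */

/* Recursive variant: each call adds one term, then makes a tail call. */
static double sum_rec(double q2, double pow_q, unsigned k, double acc)
{
    double term = pow_q / (2 * k + 1);
    double next = acc + term;
    if (term < TOL * next)
        return next;
    return sum_rec(q2, pow_q * q2, k + 1, next);
}

static double delta_recursive(unsigned N)  /* requires N >= 1 */
{
    double q = 1.0 / (2.0 * N + 1.0);
    return 2.0 * sum_rec(q * q, q, 0, 0.0);
}

/* Loop variant: the same arithmetic with an explicit break. */
static double delta_loop(unsigned N)  /* requires N >= 1 */
{
    double q = 1.0 / (2.0 * N + 1.0), q2 = q * q;
    double pow_q = q, acc = 0.0;
    for (unsigned k = 0;; ++k) {
        double term = pow_q / (2 * k + 1);
        acc += term;
        if (term < TOL * acc)
            break;
        pow_q *= q2;
    }
    return 2.0 * acc;
}

int main(void)
{
    unsigned N = 9;
    printf("recursive: ln(%u) = %.15f\n", N + 1, log((double)N) + delta_recursive(N));
    printf("loop:      ln(%u) = %.15f\n", N + 1, log((double)N) + delta_loop(N));
    printf("libm:      ln(%u) = %.15f\n", N + 1, log(N + 1.0));
    return 0;
}
```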

Max Herrmann
  • Given a good enough compiler, you might even get the very same object code. And unless circumstances are exceptional, your time is much more valuable than the computer's, so think first in terms of ease of writing/understanding the program. – vonbrand Sep 09 '15 at 09:32
  • @vonbrand On the other hand, don't trust the compiler blindly. If fast code really matters, check the compiler's work yourself. It can be surprising how dumb it can be. – Juho Sep 09 '15 at 09:37
  • @vonbrand So given a smart compiler and my recursion structure, it would nevertheless translate to a for loop? – Max Herrmann Sep 09 '15 at 09:56
  • Converting tail recursion to a loop is a compiler code improvement of old. Don't hold your breath for a compiler to convert the direct recursive definition of Fibonacci's series to "the logarithmic approach". – greybeard Sep 09 '15 at 11:11

5 Answers


The answer will depend on the compiler. As @vonbrand wrote, "Given a good enough compiler, you might even get the very same object code." In particular, good compilers will do tail-call elimination. In some cases this can effectively transform the code into a for loop. Your example looks like an instance where this could happen.
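As a minimal sketch of the kind of code tail-call elimination handles (a toy example of mine, with assumed names): the recursive call below is the last thing the function does, so a compiler that eliminates tail calls can reuse the stack frame, which is exactly a loop. One way to check is to compile with `-O2 -S` and look at whether any `call` instruction remains in the output.

```c
#include <stdio.h>

/* Tail-recursive partial sum of a geometric series: the recursive call
 * is in tail position, so a compiler doing tail-call elimination can
 * turn it into a loop with constant stack usage. */
static double geo_sum(double term, double ratio, double acc, double tol)
{
    if (term < tol)
        return acc + term;
    return geo_sum(term * ratio, ratio, acc + term, tol);  /* tail call */
}

int main(void)
{
    printf("%.12f\n", geo_sum(1.0, 0.5, 0.0, 1e-12));  /* ~2.0 */
    return 0;
}
```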

As @vonbrand says, "And unless circumstances are exceptional, your time is much more valuable than the computer's, so think first in terms of ease of writing/understanding the program". Or, to quote Tony Hoare and Donald Knuth: "Premature optimization is the root of all evil." To this I'd add: focus first on algorithm and data structure improvements (where one can often make the biggest improvements) and on user needs; then, if the performance of this particular part of the code really is critical, try both a loop and recursion and measure. Don't rely on your intuition; measure.
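A minimal measurement sketch along those lines (the two harmonic-sum workloads and all the names are toy stand-ins of mine; substitute your own two variants):

```c
#include <stdio.h>
#include <time.h>

/* Toy stand-ins for the two variants being compared. */
static double harmonic_loop(void)
{
    double s = 0.0;
    for (int i = 1; i <= 1000; ++i)
        s += 1.0 / i;
    return s;
}

static double harmonic_rec(int i, double s)
{
    return i > 1000 ? s : harmonic_rec(i + 1, s + 1.0 / i);
}

static double harmonic_rec_wrap(void) { return harmonic_rec(1, 0.0); }

/* Time `reps` calls of `f`; the volatile sink keeps the compiler from
 * optimizing the calls away entirely. clock() measures CPU time coarsely,
 * so use enough repetitions. */
static double time_it(double (*f)(void), int reps)
{
    volatile double sink = 0.0;
    clock_t t0 = clock();
    for (int i = 0; i < reps; ++i)
        sink += f();
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("loop:      %.3f s\n", time_it(harmonic_loop, 100000));
    printf("recursive: %.3f s\n", time_it(harmonic_rec_wrap, 100000));
    return 0;
}
```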

D.W.
  • And focus on algorithm (and data structure) improvements; that is where the most gain is to be had. Only once the algorithm can't be improved any more should you go for code changes, and if even then it falls short, you may grab your assembler. As the saying goes: "General guide to code optimization: Just don't do it. For real experts: Don't do it (yet)." – vonbrand Sep 09 '15 at 19:50
  • The linked article is brilliant. Thank you! – Max Herrmann Sep 10 '15 at 06:28

I don't think recursion or for loops are part of the abstract idea of an algorithm; rather, each is a specific strategy for implementing an algorithm on a computing system. So your question is really about which implementation strategy is better for an algorithm: recursion or a loop.

The answer (assuming you want to implement the algorithm on a general-purpose, off-the-shelf CPU) is that the for loop will perform better, because a recursive call incurs the overhead of a call-stack frame, and the stack grows with each recursive call.
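To illustrate the kind of overhead meant here, a small sketch (a toy example with assumed names): in the recursive sum the addition happens after the inner call returns, so each element held on the way down keeps a live stack frame, while the loop uses constant stack space. (As noted in the comments below, some compilers can still rewrite even this.)

```c
#include <stddef.h>
#include <stdio.h>

/* Non-tail recursion: the `+` runs after the recursive call returns,
 * so every element on the way down holds a live stack frame. */
static double asum_rec(const double *a, size_t n)
{
    if (n == 0)
        return 0.0;
    return a[0] + asum_rec(a + 1, n - 1);
}

/* Loop counterpart: constant stack space regardless of n. */
static double asum_loop(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}

int main(void)
{
    double a[] = { 1.0, 2.0, 3.0, 4.0 };
    printf("%.1f %.1f\n", asum_rec(a, 4), asum_loop(a, 4));
    return 0;
}
```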

Ankur
  • Is writing/reading the stack more expensive than changes in program flow? – Max Herrmann Sep 09 '15 at 09:37
  • Along with stack operations there will be changes in program flow in recursion, because you are calling a procedure. – Ankur Sep 09 '15 at 09:48
  • This answer overlooks the fact that some compilers can remove the recursive call, eliminating all overhead related to the call stack. As vonbrand says, given a good enough compiler, you might even get the very same object code. – D.W. Sep 09 '15 at 16:14
  • I tried to explain the concept of recursion and loops in general terms, without including other optimizations that can happen down the line, such as compiler optimizations or even CPU-level optimizations. – Ankur Sep 10 '15 at 06:45

It is commonly agreed that loops (for or while) lead to code that is a bit faster than equivalent code based on recursive calls.

However, this speed improvement is small and should only be sought for crucial loops, that is, loops that will be run many millions of times. For the rest of the code (probably 99% of it) clarity, robustness, ease of debugging and modifying should be the priority. Choose recursion when it is clearer, loops when they are clearer. In the long run clearer code will run faster, sooner and for a longer time, than incomprehensible code.

phs
  • "It is commonly agreed" - Is it, really? This depends heavily on the compiler. With some compilers I'd expect there might be no difference, thanks to tail-call elimination etc. – D.W. Sep 09 '15 at 16:15
  • You're right, and that point has already been made in other answers or comments. However, I'd suggest avoiding this issue since it is a bit distracting. My answer is that one should almost never argue about such tiny speed differences. That conclusion is even more valid in situations where good compilers make the speed differences disappear totally. – phs Sep 09 '15 at 19:23
  • @D.W., if your compiler can't do simple code improvements like tail call elimination, better change your compiler. Cheaper in the long run than mindless code bumming, and probably much more efficient all around to boot. – vonbrand Sep 09 '15 at 19:44
  • @vonbrand, fair enough! That makes my point even stronger. To state the point more pointedly: I don't think it's commonly agreed that loops lead to code that is faster than recursion, and I don't think that's necessarily true in all cases. This answer might be stronger if the first sentence was rewritten to say that while loops might lead to code that is a little bit faster than recursion in some cases, even in the best case where there is a difference, the difference is very small. Therefore, ease of debugging and modifying should be the priority. – D.W. Sep 09 '15 at 21:00

In general, truly recursive code will execute more slowly than its loop-based counterpart. That said, as others have mentioned, many compilers can transform recursive code into loop-based instructions.

Another point worth bringing up is that recursive code that fails to terminate keeps growing the call stack until it exhausts memory, whereas a loop-based counterpart that fails to terminate does not consume additional memory.

  • The loop counterpart runs forever if it doesn't finish, and a recursive function does not leak by itself: its frames are on the stack, and cleaning the stack only requires moving a pointer... In general a good recursive function is compiled into a loop, and leaking code leaks regardless of code structure. – Evil Sep 10 '15 at 00:58

In support of the argument that clarity should be the priority, I only wanted to add to the discussion the fact that some functions are inherently recursive; indeed, some lie beyond even the bottom level of the hierarchy of recursive functions, the primitive recursive functions. In fact, good examples of such functions are super-exponential functions (e.g., $n^{m^{u^{\dots}}}$ where $n$, $m$ and $u$ are related to the problem size), for which one could not easily imagine a simple way to implement them properly in a loop (or a number of nested loops), in particular if they consist of a recursive invocation where at least one argument is also solved recursively.

One such example is the Ackermann function. It is defined by three cases; the third solves $A(m, n)$ as $A(m-1, A(m, n-1))$. This video (The most difficult program to compute?) provides an excellent introduction to these concepts and others. Note that the Ackermann function is computable, and one can easily prove that it necessarily halts provided that its arguments are non-negative; computing its values, however, can take an inconceivable amount of time.
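A sketch of those three cases in C (names are my own; note that even modest arguments are infeasible, e.g. $A(4, 2)$ already has almost twenty thousand digits):

```c
#include <stdio.h>

/* The three cases of the Ackermann function. Note the nested
 * recursion in the third case: an argument is itself computed
 * recursively. */
static unsigned long ackermann(unsigned long m, unsigned long n)
{
    if (m == 0)
        return n + 1;
    if (n == 0)
        return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}

int main(void)
{
    printf("A(2, 3) = %lu\n", ackermann(2, 3));  /* 9 */
    printf("A(3, 3) = %lu\n", ackermann(3, 3));  /* 61 */
    return 0;
}
```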


Carlos Linares López
  • This sheds new light on the topic. In fact, it helps me to abandon the thought that recursion is nothing but a coding preference. Highly appreciated. – Max Herrmann Sep 10 '15 at 22:06
  • "one could not easily imagine a way to implement them properly in a loop" - Actually, one can easily imagine a way to implement them with for loops. Of course you can implement those algorithms with for loops instead of recursion. You can always convert any recursive algorithm to one without recursion (e.g., by using an explicit stack, and converting function calls to pushes/pops on that stack). See http://cs.stackexchange.com/q/41404/755 and http://cs.stackexchange.com/q/13139/755. – D.W. Sep 10 '15 at 22:11
  • @Max Herrmann: Happy to hear that. – Carlos Linares López Sep 10 '15 at 22:14
  • @D.W.: true when recursive functions do not include another recursive invocation as one of their arguments, as in the case of the Ackermann function. It is not that you cannot do it, but simulating one stack within another becomes far more difficult. – Carlos Linares López Sep 10 '15 at 22:14
  • You might want to edit your answer to clarify what you are saying, as the answer says "one could not easily imagine a way" -- which is incorrect. There is a way, and it's not even hard to imagine the way. (I'm not even convinced by "far more difficult"; the transformation is mechanical.) Also I don't think there's any objective sense in which those functions are "inherently recursive"; what would your definition of "inherently recursive" be? ("Easier to implement recursively" can be a bit subjective; what's easier for one person isn't necessarily the same for another.) – D.W. Sep 10 '15 at 22:16
  • e.g., If $f(n,m)$ is computed from $f(n-1,m-1)$ then you can easily simulate the recursion stack. Now, if $f(n,m)$ is computed recursively from $f(f(n-1,m), f(n,m-1))$ then you have to simulate two recursion stacks in the main one. That's pretty difficult. It is not impossible, but right now I find that "hard to imagine". How would you do it? What is the systematic way to emulate two nested recursion stacks as required by the Ackermann function? – Carlos Linares López Sep 10 '15 at 22:19
  • @D.W. I edited my answer as requested – Carlos Linares López Sep 14 '15 at 21:28
  • Having multiple arguments is not a barrier to implementing the Ackermann function with a loop. The transformation I mentioned in my comment works even if the recursive invocation has multiple arguments. Again, you use an explicit stack; each recursive invocation becomes pushing a record on the stack (e.g., the list of all parameters, plus where to continue the computation). If you think about it, your compiler ultimately compiles any recursive algorithm down to assembly language, and your CPU has no notion of recursion built into it, so the compiler is already implementing this transform (see the sketch after this thread). – D.W. Sep 14 '15 at 21:33
  • Could you please provide a link or a reference to how to simulate recursive functions with recursive arguments? (I'm well aware of the technique for simulating plain recursive functions but not for super-exponential functions) I do agree with the author of the video I mention that this function is particularly difficult to do iteratively ... – Carlos Linares López Sep 15 '15 at 06:33
  • @D.W. I tried to address your issue by editing the answer once again. This time I just added "simple way". Of course, any recursive function could be implemented iteratively just by simulating the recursion stack. If arguments are recursively solved then you can implement a second iterative procedure which does that job. Hope you agree that this is not a simple way of implementing these functions. In fact, the recursive implementation takes just three lines. Again, note I never meant "impossible", just that it is not a natural way ... – Carlos Linares López Sep 15 '15 at 09:37
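Following up on D.W.'s comments above, here is a hedged sketch of the explicit-stack transformation applied to the Ackermann function: the stack holds the pending $m$-values, and $n$ carries the running result. (The names and the fixed capacity are my own choices, not anything canonical.)

```c
#include <stdio.h>
#include <stdlib.h>

#define CAP (1u << 20)  /* toy bound on pending calls */

static unsigned long ackermann_iter(unsigned long m, unsigned long n)
{
    unsigned long *stack = malloc(CAP * sizeof *stack);
    size_t top = 0;
    if (stack == NULL)
        return 0;  /* out of memory; toy error handling */
    stack[top++] = m;
    while (top > 0) {
        m = stack[--top];
        if (m == 0) {
            n = n + 1;              /* A(0, n) = n + 1 */
        } else if (n == 0) {
            n = 1;                  /* A(m, 0) = A(m-1, 1) */
            stack[top++] = m - 1;
        } else {
            if (top + 2 > CAP) {    /* give up rather than overflow */
                fprintf(stderr, "capacity exceeded\n");
                exit(EXIT_FAILURE);
            }
            stack[top++] = m - 1;   /* outer call A(m-1, _) waits ... */
            stack[top++] = m;       /* ... for inner A(m, n-1) */
            n = n - 1;
        }
    }
    free(stack);
    return n;
}

int main(void)
{
    printf("A(3, 3) = %lu\n", ackermann_iter(3, 3));  /* 61 */
    return 0;
}
```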