
So I am studying an introductory chapter on dynamic programming that presents a general solution to an optimization problem, which follows straightforwardly from expressing the problem as a recurrence equation.

Compute-Opt(j) {
    If j = 0 then
        Return 0
    Else
        Return max(v_j + Compute-Opt(p(j)), Compute-Opt(j − 1))
    Endif
}
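
For concreteness, here is a small Python sketch of how I read this pseudocode (my own translation, not from the book; I just filled $v$ and $p$ with made-up placeholder numbers so it runs, and added a counter for the number of recursive calls):

    call_count = 0

    def compute_opt(j, v, p):
        """Direct translation of Compute-Opt: no memoization, so the same
        subproblem can be recomputed many times in the call tree."""
        global call_count
        call_count += 1
        if j == 0:
            return 0
        return max(v[j] + compute_opt(p[j], v, p), compute_opt(j - 1, v, p))

    # Made-up placeholder instance, 1-indexed (index 0 is a dummy entry).
    v = [0, 2, 4, 4, 7, 2]
    p = [0, 0, 0, 1, 0, 3]
    print(compute_opt(5, v, p), call_count)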

Although I do get the equation and the recursion tree, I do not get how we can quickly determine that this takes exponential time.

I can see that computing the max of these two means that we have to solve two subproblems, where $$T(j) = T(p(j)) + T(j-1) \leq 2T(j-1),$$ and I know how to solve that with the master theorem, but I want to grasp intuitively how to conclude that this is exponential.
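To make my confusion concrete, this is as far as I can unroll that bound on my own (just the standard unrolling, nothing from the book): $$T(j) \leq 2T(j-1) \leq 2^2 T(j-2) \leq \dots \leq 2^j T(0) = O(2^j).$$ That only gives me an exponential upper bound, though, and an upper bound of $2^j$ by itself would still be compatible with a polynomial running time.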
To my mind: the worst case is when $v_j + \text{Compute-Opt}(p(j)) \leq \text{Compute-Opt}(j-1)$ always holds, so we take the long path to the base case (is that correct?).
The tree then has height at most $\log(j)$. How do I continue my thought from there (is it going in the right direction)?

This is a common theme in many CS books: they use the simplest recursive version of Fibonacci (with no memoization) to introduce the reader to the idea that bad recursive algorithms take exponential time (and I can see that we do in fact compute the same subproblems many times), but I never really grasped the mathematics behind it.
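
For comparison, this is the naive Fibonacci I have in mind (a minimal sketch with a call counter; none of it is taken from the book):

    calls = 0

    def fib(n):
        """Naive recursive Fibonacci with no memoization: fib(n - 2) is
        recomputed inside the call to fib(n - 1)."""
        global calls
        calls += 1
        if n <= 1:
            return n
        return fib(n - 1) + fib(n - 2)

    for n in (10, 20, 25):
        calls = 0
        fib(n)
        print(n, calls)  # the call counts themselves grow like the Fibonacci numbers

The counts blow up quickly, which matches the picture of recomputing the same subproblems, but I am missing the argument that turns that picture into a proof of exponential running time.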

tonythestark
    What is $p$? Why is there a $n$ and a $j$ in your equation? – Nathaniel Nov 27 '21 at 11:30
  • https://cs.stackexchange.com/q/135815/755, https://cs.stackexchange.com/q/52481/755, https://cs.stackexchange.com/q/2789/755, https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms) – D.W. Nov 27 '21 at 22:55

0 Answers