
I couldn't find a good answer to how this formula was derived for the divide and conquer algorithm in a 1D Peak-Finding problem.

About the problem

Basically, there's an array of numbers and we want to find a peak in this array (a peak is a number higher than the two numbers to the left and right of it).
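To pin the definition down, here is a small Python check (a helper of my own, not from the lecture; it uses the lecture's $\geq$ convention, under which every array has at least one peak, and boundary elements only need to beat their single neighbour):

```python
def is_peak(a, i):
    """True if a[i] is a peak: at least as large as its neighbours.

    Boundary elements are compared only against the one neighbour they have.
    """
    left_ok = i == 0 or a[i - 1] <= a[i]
    right_ok = i == len(a) - 1 or a[i] >= a[i + 1]
    return left_ok and right_ok
```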

The formula

$T(n) = T(n/2) + \Theta(1)$

I watched the MIT video "1. Algorithmic Thinking, Peak Finding", but the formula was just written down as though it were obvious. I guess it might be; would anyone be kind enough to explain it?

I can sort of guess that $n/2$ comes from the fact that we are always choosing only one side of the array, but this is really just a guess and I get totally lost when suddenly the formula is expanded and we get

$\Theta(\log_2 n)$

instead...

xskxzr
huanan_c

2 Answers


Your guess is absolutely correct. The time taken to process an array of length $n$ is the time taken to choose which half to recurse on, plus the time taken to do the recursion. Doing the recursion takes time $T(n/2)$, since $T(m)$ is, by definition, the time taken to process an array of length $m$. Choosing which half to recurse on takes some constant number of steps, which is what $\Theta(1)$ means. So this gives us that $T(n) = T(n/2) + \Theta(1)$. And, implicitly, $T(1)=\Theta(1)$ since, if the array has length one, there's no recursion to do.
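The recurrence is easiest to see with the algorithm written out. Here is a minimal recursive sketch in Python (names and details are mine, not the lecture's code; it assumes the $\geq$ peak convention, so the half we recurse into is guaranteed to contain a peak):

```python
def find_peak(a, lo=0, hi=None):
    """Index of some peak in a[lo..hi], mirroring T(n) = T(n/2) + Theta(1)."""
    if hi is None:
        hi = len(a) - 1
    if lo == hi:                            # base case: T(1) = Theta(1)
        return lo
    mid = (lo + hi) // 2                    # Theta(1) work: one comparison
    if a[mid] < a[mid + 1]:
        return find_peak(a, mid + 1, hi)    # a peak must exist on the right
    else:
        return find_peak(a, lo, mid)        # a[mid] >= a[mid+1]: look left
```

Each call does a constant amount of work (one comparison and some index arithmetic) and then makes exactly one recursive call on a range of half the size, which is precisely $T(n) = T(n/2) + \Theta(1)$.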

How to solve that recurrence to get $T(n)=\Theta(\log n)$ is covered by our reference question. A rough-and-ready way to see it is to observe that the length of the array halves on each recursive call, so the number of recursions before we reach a single-element array is the answer to the question "How many times can I divide $n$ by two before I get $1$?", which is $\log_2 n$.
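Writing $c$ for the constant hidden in $\Theta(1)$, unrolling the recurrence shows the same thing:

$$T(n) = T(n/2) + c = T(n/4) + 2c = \cdots = T(n/2^k) + kc,$$

which bottoms out when $n/2^k = 1$, i.e. when $k = \log_2 n$, giving $T(n) = T(1) + c\log_2 n = \Theta(\log n)$.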

David Richerby

I had the same issue when I was watching the first video recently.

I don't know what your background is, but if you already have some basic grounding in discrete mathematics, I suggest taking the formal route to understanding it.

That is, take any good introductory discrete mathematics textbook and read the sections on recurrence relations and algorithm complexity. Personally, I used Susanna Epp's Discrete Mathematics with Applications (sections 5.7 and 11.2–11.5). Depending on your background, you may need to read more or fewer sections. It helped me immensely in understanding lectures on recursive algorithms.

Since "MIT 6.006 Introduction to Algorithms" seems to be mostly about learning to measure the time complexity of various algorithms, I strongly recommend that you grab a discrete maths text and build up the mathematical foundations as you go.

I'm currently doing this, and the next sections I need to read from Epp's text are regarding trees, graphs, and discrete probability.

I honestly think you cannot get much out of 6.006 without studying basic discrete mathematics; without it, you will not be able to follow the time-complexity analysis.