3

Algorithms using the "divide and conquer" (wiki) design strategy often have a time complexity of the form $T(n) = aT(n/b) + f(n)$, where $n$ is the problem size. Classic examples are binary search ($T(n) = T(n/2) + O(1)$) and merge sort ($T(n) = 2T(n/2) + O(n)$).

Do you know any algorithms (probably using "divide and conquer") that have the time complexity of the form $T(n) = \sqrt{n} \cdot T(\sqrt{n}) + O(n)$?

David Richerby
hengxin

1 Answer

3

Think of an algorithm that does something linear with an integer list of length $n$, for example computing the maximum. Afterwards, the algorithm divides the list of length $n$ into $\sqrt{n}$ lists of length $\sqrt{n}$ and runs itself recursively on each of them. The result of the algorithm could be, for example, the product of the computed maximum and the results for the $\sqrt{n}$ sublists. In the base case, a list of length $1$, you return the value of the only element.

This algorithm has exactly the time complexity you asked for.
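A minimal sketch of such an algorithm in Python (the function name `solve` and the choice of "product of maxima" are just illustrations of the scheme described above, not a standard algorithm):

```python
import math

def solve(xs):
    """Toy algorithm with recurrence T(n) = sqrt(n) * T(sqrt(n)) + O(n):
    one linear pass (the maximum), then recursion on ~sqrt(n) blocks
    of length ~sqrt(n)."""
    n = len(xs)
    if n == 1:
        return xs[0]              # base case: single element
    m = max(xs)                   # O(n) linear work
    k = math.isqrt(n)             # block length ~ sqrt(n); 1 <= k < n for n >= 2
    result = m
    for i in range(0, n, k):      # about sqrt(n) blocks
        result *= solve(xs[i:i + k])
    return result
```

When $n$ is not a perfect square the last block is shorter, but each block has length at most $\lfloor\sqrt{n}\rfloor < n$, so the recursion terminates and the recurrence still has the stated form up to constant factors.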

Danny
    To give a concrete example, Mergesort could be implemented in this way. In massively parallel settings, this may even make sense. – Raphael Nov 07 '14 at 08:51