
I want to implement an insertion sort algorithm. Say I have this pseudocode:

function insertionSort(F)    ▷ F is an array of length n
  for i <- 1 to n - 1 do
    j <- findInsertPosition(F, i, 0, i - 1)    ▷ F[0..i-1] is already sorted
    insert(F, i, j)    ▷ move F[i] to index j, shifting F[j..i-1] right
  end for
end function

Now I want to implement the findInsertPosition function. Say I pass it my list, which is an array, and convert that array into a tree, so that I can search the tree iteratively for my element.

Would the complexity of the whole algorithm still be in $O(n \log n)$?


1 Answer


Would the complexity of the whole algorithm still be in $O(n \log n)$?

First, let's check where we're coming from: Insertion Sort runs in time $\Theta(n^2)$ (worst and average case) if you implement findInsertPosition (fIP) with a simple loop that scans the sorted prefix of the array.
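To make that baseline concrete, here is a minimal sketch of this variant in Python (my choice of language; the function names just mirror the pseudocode):

def find_insert_position(F, i, lo, hi):
    # Linear scan: first index j in F[lo..hi] with F[j] > F[i].
    for j in range(lo, hi + 1):
        if F[j] > F[i]:
            return j
    return hi + 1  # F[i] belongs at the end of the sorted prefix

def insert(F, i, j):
    # Move F[i] to index j, shifting F[j..i-1] one slot to the right.
    key = F[i]
    F[j + 1:i + 1] = F[j:i]
    F[j] = key

def insertion_sort(F):
    for i in range(1, len(F)):
        j = find_insert_position(F, i, 0, i - 1)
        insert(F, i, j)

Both the scan and the shift touch up to i elements per iteration, which is where the $\Theta(n^2)$ comes from.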

Second, let's check what we could possibly achieve. Assume that fIP runs in time $O(1)$. We still get $\Theta(n^2)$, since insert alone takes time linear in i in each iteration (worst and average case), adding up to $\Theta(n^2)$.

Third, why use a tree at all? You can easily achieve running time logarithmic in i for fIP (for a cumulative search cost in $\Theta(n \log n)$) by using binary search, as sketched below. That would still be dominated by the cumulative cost of insert, of course.
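For illustration, a binary-search fIP might look like this, using Python's standard bisect module (bisect_right keeps the same "insert after equal keys" behaviour as the linear scan above; the + 1 is needed because bisect's bounds are half-open):

from bisect import bisect_right

def find_insert_position(F, i, lo, hi):
    # Binary search in the sorted prefix F[lo..hi] for F[i]'s slot.
    return bisect_right(F, F[i], lo, hi + 1)

This drops the comparison count to $O(\log i)$ per iteration, but the shifting done by insert is untouched and still dominates.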

If you did use a tree, it would make the most sense not to throw it away between iterations; fIP would then be obsolete. You would effectively be sorting by building a search tree (which we don't really call Insertion Sort), but, yes, that would have running time in $\Theta(n \log n)$, provided you use a balanced search tree. A sketch follows.
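Note the caveat in the sketch: Python's standard library has no balanced search tree, so this uses a plain BST, whose worst case degrades to $\Theta(n^2)$ on adversarial input; a balanced variant (AVL, red-black) with rebalancing in insert_node gives the $\Theta(n \log n)$ bound.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert_node(root, key):
    # Insert key into the BST rooted at root; return the (new) root.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert_node(root.left, key)
    else:
        root.right = insert_node(root.right, key)
    return root

def in_order(node, out):
    # Append the keys under node to out in sorted order.
    if node is not None:
        in_order(node.left, out)
        out.append(node.key)
        in_order(node.right, out)

def tree_sort(F):
    root = None
    for x in F:              # n insertions, O(log n) each if balanced
        root = insert_node(root, x)
    out = []
    in_order(root, out)      # a single O(n) traversal emits sorted order
    return out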

In summary, no, it does not make sense to try and tweak Insertion Sort by using search trees.
