
Does any of you know how to relate an algorithm's measured execution time (from experimental runs) to a well-known complexity class? For example, say a divide-and-conquer algorithm theoretically runs in O(n·log n), O(n^2), or Ω(n·log n), derived from recurrence equations and all that. But when you plot the data from a CSV, you can't say just by looking at it "oh yeah, this is O(n^2)".

The data from the CSV is like:

n, execution_time_for_that_n...

How can I use a regression method to derive O-classes from such data?

dunnomuch
  • I removed the parts about implementing stuff in Excel and such, since that's offtopic here. – Raphael Apr 07 '17 at 19:18
  • See http://cs.stackexchange.com/q/48505/755, http://cs.stackexchange.com/q/33854/755, http://cs.stackexchange.com/q/857/755, http://cs.stackexchange.com/q/66378/755 for an overview of the topic. – D.W. Apr 07 '17 at 19:24

1 Answer


It's impossible. No finite set of measurements can prove asymptotic bounds.

Proof sketch: Assume you had such a method. Apply it to any algorithm $A$. If your largest measurement is for input size $n_0$, construct a new algorithm $B$ that branches into $A$ for all $n \leq n_0$, and into an algorithm with a different $\Theta$ runtime class for $n > n_0$. Since $B$ produces exactly the same measurements as $A$ up to $n_0$, the method must report the same bound for both, yet their asymptotic classes differ; contradiction.
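
To make the construction concrete, here is a small illustrative sketch (Python; `algorithm_a` is just a placeholder for any concrete $A$, and `N0` stands for the largest measured input size, both made up for the example):

```python
# Illustrative sketch of the adversarial construction in the proof idea.
# `algorithm_a` is a stand-in for any algorithm A; N0 is the largest
# input size for which measurements exist.
N0 = 10_000

def algorithm_a(x):
    return sorted(x)  # placeholder: any concrete algorithm A works here

def algorithm_b(x):
    if len(x) <= N0:
        # Identical to A on every input size the experiment measured.
        return algorithm_a(x)
    # On larger inputs, deliberately fall into a different Theta-class,
    # e.g. by burning an extra Theta(|x|^2) steps before answering.
    for _ in range(len(x) * len(x)):
        pass
    return algorithm_a(x)
```

No finite set of measurements can tell `algorithm_a` and `algorithm_b` apart.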

That said, there are ways to perform reasonable experiments (for estimating average-case performance in practice). I recommend the book "A Guide to Experimental Algorithmics" by Catherine C. McGeoch.
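
As an illustration of the kind of regression the question asks about, here is a minimal sketch (Python with NumPy, assuming a hypothetical `timings.csv` with no header and rows of the form `n, time`): fit a few candidate growth functions by least squares and report the one with the smallest residual. Per the argument above, this only suggests a plausible model on the measured range; it does not establish any asymptotic bound.

```python
# Heuristic model selection: fit candidate growth functions to (n, time)
# measurements and report which one fits best on the measured range.
import csv

import numpy as np

# Hypothetical file name and column layout: n, execution_time_for_that_n
def load_measurements(path="timings.csv"):
    ns, ts = [], []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            ns.append(float(row[0]))
            ts.append(float(row[1]))
    return np.array(ns), np.array(ts)

# Candidate growth functions to compare against the measurements.
CANDIDATES = {
    "n": lambda n: n,
    "n log n": lambda n: n * np.log(n),
    "n^2": lambda n: n ** 2,
    "n^2 log n": lambda n: n ** 2 * np.log(n),
    "n^3": lambda n: n ** 3,
}

def best_fit(ns, ts):
    residuals = {}
    for name, g in CANDIDATES.items():
        x = g(ns)
        # Least-squares fit t ≈ a*g(n) + b, then sum the squared residuals.
        a, b = np.polyfit(x, ts, 1)
        residuals[name] = float(np.sum((ts - (a * x + b)) ** 2))
    return min(residuals, key=residuals.get), residuals

if __name__ == "__main__":
    ns, ts = load_measurements()
    guess, residuals = best_fit(ns, ts)
    print("best-fitting candidate on the measured range:", guess)
```

Treat the output as a hypothesis to investigate further (e.g. by measuring larger inputs), never as a proof of an O-class.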

Raphael