5

I'm wondering whether an algorithm can have a runtime that decreases monotonically with the input size, just as a fun mental exercise. If not, is it possible to disprove this claim? I haven't been able to come up with an example or a counterexample so far, and it sounds like an interesting problem.

P.S. Something like $O(\frac{1}{n})$, I guess (if it exists)

stoic-santiago

4 Answers

10

Try brute-force searching for a cryptographic key. The more of the key you are given to start with, the less you have to search for. True, that trend stops at the limit of the key size (but the runtime is still monotonically non-increasing), and there are probably other examples in the field of exhaustive search where the more input data you have, the easier it is to prune branches of the search tree.
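
To make this concrete, here is a minimal sketch assuming a toy 16-bit key; the key size, the secret, and the check function are illustrative stand-ins, not any real cryptographic API. The work is proportional to $2^{k-m}$ for a $k$-bit key with $m$ known bits, which shrinks monotonically as $m$ grows:

KEY_BITS = 16
SECRET = 0b1010011011000101  # hypothetical key we are searching for

def check(candidate):
    # stand-in for "the candidate key decrypts the ciphertext correctly"
    return candidate == SECRET

def brute_force(known_prefix, prefix_len):
    # Try every possible suffix: 2**(KEY_BITS - prefix_len) candidates,
    # so the more of the key we are given, the less work remains.
    unknown = KEY_BITS - prefix_len
    base = known_prefix << unknown
    for suffix in range(2 ** unknown):
        if check(base | suffix):
            return base | suffix
    return None

With no known prefix this tries up to $2^{16}$ candidates; given the first 8 bits, as in brute_force(SECRET >> 8, 8), it tries at most $2^8$.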

cinut
8

Well, an algorithm with runtime $O(0)$ fulfills the criterion: it does nothing at all. As soon as your algorithm performs at least one operation on every execution, it has runtime cost $t(n) > 0$. Since $$t(n)\in O(1/n) \Leftrightarrow \exists c, n_0\ \forall n > n_0: t(n) \leq c\cdot\frac{1}{n}$$ and $c/n$ tends to $0$, an algorithm with constant positive runtime does not have runtime $O(1/n)$.

This means that under a runtime measure where every operation costs at least $1$, only the empty algorithm has runtime $O(1/n)$. But if you, for example, declare that an if-statement's condition check has cost zero, you can build algorithms whose runtime cost is $0$ once a certain input size is reached, e.g.:

def algo(n):
    if n < 100:
        # placeholder for some very expensive computation
        sum(i * i for i in range(10**8))

If you declare condition checking a zero-cost operation, this algorithm has runtime $O(0)$, and thus also runtime $O(1/n)$, even though it performs a very expensive operation for the first hundred input values.

Generally, a decreasing complexity is rather meaningless, because you can always express it as either $O(1)$ or $O(0)$ (e.g. $O(1/n + 10) = O(1)$).

plshelp
  • This is a very confusingly worded answer, with multiple false claims, such as "An algorithm with constant runtime doesn't have runtime O(1)" and "O(1/n) = O(0)". – Acccumulation Sep 06 '20 at 23:25
1

Just to mention something in addition to the other (correct) answers: such complexities can arise when the runtime of the algorithm depends on more than just one parameter, or when one does not care about the input size alone. For example, searching for the minimum among $n$ elements is clearly in $O(n)$; however, if you do this in parallel using $p$ processors, the complexity is in $O(\frac{n}{p} + \log p)$, which decreases as $p$ grows.
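
A minimal sketch of this cost model, assuming a shared-memory thread pool (note that in CPython the GIL prevents real parallel speedup for CPU-bound work, so this illustrates the structure of the $O(\frac{n}{p} + \log p)$ bound rather than actual wall-clock gains):

from concurrent.futures import ThreadPoolExecutor

def parallel_min(xs, p=4):
    # Split the (non-empty) input into p chunks; each worker scans
    # about n/p elements, which is the n/p term of the bound.
    chunk = (len(xs) + p - 1) // p
    parts = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ThreadPoolExecutor(max_workers=p) as pool:
        partial = list(pool.map(min, parts))
    # Combining the p partial minima is the reduction step; the model
    # charges O(log p) for a parallel tree reduction (sequential here).
    return min(partial)

For fixed $n$, increasing $p$ shrinks the $\frac{n}{p}$ term, so the modeled runtime decreases in one of its parameters.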

Another setting where this phenomenon arises is when a precision parameter $\epsilon$ is present, which you usually want to be as close to $0$ as possible. In such cases, a runtime of $O(\epsilon^{-1})$ is actually "worse" than one of $O(\epsilon)$, because you want the error to be as small as possible.
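
As an example, here is a sketch of a left Riemann-sum integrator whose work grows like $O(\epsilon^{-1})$ as the step size $\epsilon$ shrinks; the function and interval below are arbitrary choices for illustration:

def integrate(f, a, b, eps):
    # The number of evaluation points is (b - a) / eps, so the work
    # is O(1/eps): higher precision (smaller eps) means more work.
    steps = int((b - a) / eps)
    return sum(f(a + i * eps) for i in range(steps)) * eps

For instance, integrate(lambda x: x * x, 0.0, 1.0, 1e-6) approximates $\int_0^1 x^2\,dx = 1/3$ with about a million evaluations; halving eps doubles the work.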

Fabian
0

Just reading in the (full) input already takes $O(n)$ for input of size $n$, so any algorithm that looks at all of its input cannot have a decreasing runtime.

vonbrand
  • Just reading the first bit of input is $O(1)$. – Oscar Smith Sep 06 '20 at 03:17
  • The complexity of a function which takes the input as a parameter will not include the time taken to read the input. Case in point: binary search is considered to be $O(\log n)$ rather than $O(n)$. – Bernhard Barker Sep 06 '20 at 03:39
  • @BernhardBarker Well, that's only because the parameter is a pointer. – Dmitri Urbanowicz Sep 06 '20 at 07:39
  • I think it depends on the computational model. A Turing-machine implementation of binary search would be closer to $O(n^2)$, as seeking is done in $O(n)$. In the RAM model it is $O(\log n)$. I think the RAM model is usually the one used when discussing complexity. – Maja Piechotka Sep 06 '20 at 23:03
  • None of this discussion of $O(n)$ makes sense without defining what $n$ is (is it the number of elements in an array, or the number of bits of the pointer that points to the array?) and, as Maciej pointed out, what the computational model is. The true cost of a pointer dereference is incredibly complicated on modern CPUs, because of the many different caches involved and the prediction algorithms used to prefetch memory. – Alexander Sep 07 '20 at 00:20
  • @Alexander-ReinstateMonica That's tangential, but the definition I was taught is that $n$ is the size of the input. For example, $n$ is the value itself for base-1 encoding but $\log_k n$ for base $k$; for binary search it is the array plus the element, etc. – Maja Piechotka Sep 07 '20 at 09:18