2

So, I would like to know the time complexity of the following code:

float x = (float) rand() / rand();   // T(4)

while (x >= 0.01)   // T(?)
{
    x *= 0.8;  // T(?) x T(2)
}

Assuming that all the basic operations are performed once, is the best case T(1), i.e., constant time? That would only happen when the randomly generated x is already less than 0.01.

What about the average case? Is it T(?) x T(1) / 2?

Thanks a lot!

erwinleonardy
  • It all depends on the definition. I think that rand() is bounded by a constant in your case. Then the initial x is also bounded and the answer is simple. Ah, well, there is one case where this program does not halt. – rus9384 Jul 13 '17 at 16:13
  • Please edit your question to define the behavior of rand(), so knowledge of any particular programming language is not needed to be able to answer the question. Is this code fragment in C? Not everyone here may know C. We want questions to be language-independent and to be understandable even to people who don't know a particular programming language. Thank you! – D.W. Jul 13 '17 at 17:41
  • Welcome to Computer Science! What have you tried? Where did you get stuck? We do not want to just hand you the solution; we want you to gain understanding. However, as it is we do not know what your underlying problem is, so we can not begin to help. See here for tips on asking questions about exercise problems. If you are uncertain how to improve your question, why not ask around in [chat]? – Raphael Jul 13 '17 at 18:09
  • @rus9384 I have edited the definition of rand() – erwinleonardy Jul 14 '17 at 02:18
  • @D.W. I have changed the definition of rand() to something more generic. – erwinleonardy Jul 14 '17 at 02:18
  • @Raphael I was the one who wrote the code. I just want to know what the time complexity of a while loop is when the value checked in the while condition is randomly generated. – erwinleonardy Jul 14 '17 at 02:20
  • The latest edit changes the question in a fundamental way that invalidates prior answers. This doesn't seem like an improvement. In particular, it's not possible to compute the average case running time without knowing the distribution of x. – D.W. Jul 14 '17 at 05:38
  • @D.W. OK. If that is the case, am I still able to calculate the time complexity of the while loop, or T(?) in this case? – erwinleonardy Jul 14 '17 at 05:53
  • Well, this problem is undecidable in general. But if you put x = (rand() + 1) / (rand() + 1), it would have $O(1)$ complexity. – rus9384 Jul 14 '17 at 07:50

3 Answers

8

Worst case: if the second call to rand() returns 0 and the first call doesn't, you get a floating-point division by zero, and under standard IEEE 754 arithmetic the result is +infinity. In that case, the loop runs forever.
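
A minimal demonstration of this worst case (a sketch, assuming IEEE 754 floats; the volatile is there only to keep the compiler from folding the constant division):

/* Worst-case sketch, assuming IEEE 754 floats: dividing a nonzero float
   by zero yields +infinity rather than raising an exception, so the loop
   condition x >= 0.01 stays true forever. */
#include <stdio.h>
#include <math.h>

int main(void) {
    volatile float zero = 0.0f;   /* volatile prevents constant folding */
    float x = 1.0f / zero;        /* +inf under IEEE 754 */
    printf("x = %f, isinf(x) = %d\n", x, isinf(x));
    /* while (x >= 0.01) { x *= 0.8; }   -- never terminates: inf * 0.8 == inf */
    return 0;
}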

If you change your code to exclude that case (and rand() doesn't return, say, arbitrary 128-bit integers), then on any implementation the value of x is bounded, so the number of multiplications by 0.8 is bounded, and the runtime is O(1).
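
To see how small that bound actually is, here is a back-of-the-envelope computation (a sketch, assuming x starts out as an ordinary finite float, i.e., at most FLT_MAX):

/* Upper bound on the iteration count: x * 0.8^n < 0.01 once
   n > log(x / 0.01) / log(1.25), so plugging in x = FLT_MAX bounds
   the loop for any finite float. */
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    double bound = ceil(log((double)FLT_MAX / 0.01) / log(1.25));
    printf("at most %.0f multiplications for any finite float x\n", bound);
    /* prints roughly 419: a constant, hence O(1) */
    return 0;
}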

gnasher729
  • Assuming that the random number is changed to ((float) rand() / rand() + 0.1), there is no division-by-zero exception. – erwinleonardy Jul 14 '17 at 02:06
  • Why is the runtime O(1)? I thought the complexity would grow exponentially if n is a really huge number? I was guessing it has O(n^2) complexity or even worse. – erwinleonardy Jul 14 '17 at 02:08
  • @erwinleonardy because x has an upper bound. – Taemyr Jul 14 '17 at 06:56
6

This answer refers to a version of the question in which $x$ is sampled by dividing two random numbers.

As mentioned in Rick Decker's answer, given $x$, we can approximate the running time by $O(\max(\log x,1))$. Assuming that rand returns a random number in $[0,1]$, the running time should be proportional (up to an additive constant) to $$ \int_0^1 \int_0^1 \max(\log \tfrac{x}{y},0) \, dx \, dy. $$ Let us start by computing the inner integral: $$ \int_0^1 \max(\log \tfrac{x}{y},0) \, dx = \int_y^1 \log \tfrac{x}{y} \, dx = -(1-y)\log y + \int_y^1 \log x \, dx = \\ -(1-y)\log y + \left. x(\log x-1) \right|_y^1 = -(1-y)\log y-1-y(\log y-1) = \\ y-\log y-1. $$ Integrating this over $y$, we get $$ \int_0^1 (y-\log y-1) \, dy = \left. \tfrac{1}{2} y^2 - y\log y \right|_0^1 = \frac{1}{2}. $$ This shows that in this idealized setting, the expected running time is $O(1)$ (rather than infinite, which could also have been the case).
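
A quick Monte Carlo sanity check of this integral (a sketch, using the POSIX drand48() as a stand-in for an ideal uniform generator on $[0,1]$):

/* Estimates E[max(log(x/y), 0)] for x, y uniform on (0,1); the integral
   above says the answer should be 1/2. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    const long trials = 10000000;
    double sum = 0.0;
    srand48(2017);
    for (long i = 0; i < trials; i++) {
        double x = drand48();      /* uniform on [0, 1) */
        double y = drand48();
        if (x > 0.0 && y > 0.0) {  /* skip the measure-zero log(0) cases */
            double t = log(x / y);
            if (t > 0.0) sum += t;
        }
    }
    printf("estimate = %f (exact value: 0.5)\n", sum / trials);
    return 0;
}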

Yuval Filmus
  • His question is about average time. It is possible that x is generated as $\infty$, and then the program does not halt. But in all (finitely many) other cases the runtime is constant. Is the average time then also constant (while the worst case is $\infty$)? And in his code rand() returns integers in some interval, I think. – rus9384 Jul 13 '17 at 16:55
  • Even if rand returns an integer from $1$ to $N$ (or from $0$ to $N$), in the large $N$ limit we get the same result, since rand/rand is the same as (rand/N)/(rand/N), and rand/N converges in the large $N$ limit to the uniform distribution over $[0,1]$. – Yuval Filmus Jul 13 '17 at 17:04
  • I think the assumption in your answer is wrong (in C): in C rand() returns an integer in some range. Thus there is a non-zero probability it returns 0, and there is a non-zero probability that x is infinity and the loop never terminates. If a discrete random variable has a non-zero probability of being infinity, then its expected value is infinite. So I think gnasher729 is right, and technically the average running time is infinite. – D.W. Jul 13 '17 at 17:39
  • @D.W. That's what I asked in my comment. So if only one case out of thousands never halts, the average runtime is infinite? – rus9384 Jul 13 '17 at 18:34
  • @D.W. Actually, if $y$ is 0 then there is a division-by-zero exception (there's another exception if $x=0$). I think that the interesting way to play this game is to take the limit $N\to\infty$. – Yuval Filmus Jul 13 '17 at 21:33
  • @YuvalFilmus, I don't think that's right. It's right for integer division, but I think floating point division by zero doesn't trigger an exception; it just gives the result +infinity. (I think. I tried it, and that's what seems to happen on my computer.) I do agree your answer is interesting and useful; I think the problem is that the question isn't as clearly defined as it could be. – D.W. Jul 13 '17 at 21:43
  • Sorry for the ambiguous question, guys! I have changed the definition of rand() to something more generic. – erwinleonardy Jul 14 '17 at 02:32
  • @YuvalFilmus So the best case and average case would also be constant time? I still don't get it. Why does it have $O(1)$ complexity? As x gets bigger, the number of basic steps will also proliferate. Why isn't it at least O(n^2)? Sorry for being really stupid. – erwinleonardy Jul 14 '17 at 02:39
  • What is $n$ in your case? Usually it's the input length, but in your case there's no input. – Yuval Filmus Jul 14 '17 at 04:42
  • @YuvalFilmus Isn't x considered the input for the while loop? Can we consider x as n here? I am just wondering what the time complexity of the while loop is. O(1) doesn't make much sense to me, since there should be more computation when x=100 compared to x=0.01. Thanks!! – erwinleonardy Jul 14 '17 at 05:49
  • If you want to express the running time as a function of $x$, then $x$ should be your variable. – Yuval Filmus Jul 14 '17 at 06:02
  • Ah, I see. Thank you. But why isn't it O(log n) or O(n)? Why is it constant time? I believe the number of computations depends on x? @YuvalFilmus – erwinleonardy Jul 14 '17 at 13:59
  • I was analyzing the average time complexity, which doesn't depend on anything. – Yuval Filmus Jul 14 '17 at 14:08
2

In general, the time taken by this snippet is mainly governed by how many times the loop iterates. In other words, how many times will you need to multiply $x$ by $0.8$ to get a result less than $0.01$?

Equivalently, for a fixed $x$, what value of $n$ will ensure that $(0.8)^n x < 0.01$?
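
Solving that inequality gives $n > \log(x/0.01)/\log(1.25)$, so for $x \ge 0.01$ the loop runs $\lfloor \log(x/0.01)/\log(1.25) \rfloor + 1$ times, which is $\Theta(\log x)$. A quick sketch to check the closed form against the actual loop (agreement at exact boundary values depends on floating-point rounding):

/* Compares the actual loop count with floor(log(x/0.01)/log(1.25)) + 1,
   the smallest n with x * 0.8^n < 0.01 (for x >= 0.01). */
#include <stdio.h>
#include <math.h>

int main(void) {
    double xs[] = { 0.01, 0.5, 1.0, 100.0, 1e6 };
    for (int i = 0; i < 5; i++) {
        double x = xs[i];
        int n = 0;
        while (x >= 0.01) { x *= 0.8; n++; }
        double predicted = floor(log(xs[i] / 0.01) / log(1.25)) + 1;
        printf("x = %g: loop ran %d times, formula predicts %.0f\n",
               xs[i], n, predicted);
    }
    return 0;
}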

Rick Decker