Suppose I have a computer that solves a certain problem in an average time T. The average time T is calculated from the probability distribution p(t), where p(t) is the probability that the program halts at time t, meaning the problem has been solved.
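In symbols (assuming time is discrete, t = 1, 2, …), T is just the expectation of the halting time:

$$T = \sum_{t} t\, p(t).$$

(If time is continuous, the sum becomes an integral over a density p(t).)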
Now suppose we run n identical computers simultaneously and independently of each other, each solving the same problem as described above. I am interested in the new average time it takes before at least one of the computers has solved the problem.
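To state the question precisely: if t_1, …, t_n denote the (independent, identically distributed) halting times of the n computers, the quantity I want is the expected minimum,

$$T_n = \mathbb{E}\!\left[\min(t_1, \dots, t_n)\right].$$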
I'm not sure how I should tackle this problem.