The "best" time complexity of an algorithm is when it reaches the upper complexity bound for the given problem.
Let us take a trivial example: design an algorithm to count zeroes in a sequence of $N$ numbers.
Obviously, you need to "read" every number to get the answer (because you cannot guess a number from the others), so any solution takes $\Omega(N)$ time. If you are able to design an algorithm that runs in $O(N)$ time, you can stop searching for better solutions: at least in the asymptotic sense, your algorithm is optimal and cannot be improved.
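For instance, a single linear scan already meets this bound. Here is a minimal sketch in Python (the function name is ours):

```python
def count_zeroes(numbers):
    """Count the zeroes in a sequence by examining every element exactly once: O(N)."""
    count = 0
    for x in numbers:   # each element must be read at least once
        if x == 0:
            count += 1
    return count


print(count_zeroes([3, 0, 7, 0, 0, 1]))  # prints 3
```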
Asymptotic complexities are used mostly for mathematical tractability and for "portability": they ignore numerous low-level details of the analysis, they simplify the equations (even though in many cases the derivations remain quite involved), and they are independent of the architectural specifics of the hardware.
Asymptotic analysis is a powerful tool for algorithm designers to rate the performance of their solutions. However, it is only a reliable indicator for sufficiently large problem sizes. In practice, nothing replaces benchmarking on representative data sets, and an algorithm with a worse asymptotic complexity can turn out to be the real winner.
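As an illustration, a quick benchmark can reveal where the asymptotically better algorithm actually starts to pay off. The sketch below uses Python's `timeit`; the input sizes, the choice of target, and the repetition count are arbitrary, and the crossover point will vary across machines:

```python
import random
import timeit


def linear_search(seq, target):
    """O(N) scan: a simple loop with very low per-step cost."""
    for i, x in enumerate(seq):
        if x == target:
            return i
    return -1


def binary_search(seq, target):
    """O(log N) search on a sorted sequence, with more bookkeeping per step."""
    lo, hi = 0, len(seq) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if seq[mid] == target:
            return mid
        if seq[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1


# Compare both searches on increasing input sizes; on tiny inputs the
# "theoretically poor" linear scan can be competitive or even faster.
for n in (8, 64, 4096):
    data = sorted(random.sample(range(10 * n), n))
    target = data[n // 2]
    t_lin = timeit.timeit(lambda: linear_search(data, target), number=20000)
    t_bin = timeit.timeit(lambda: binary_search(data, target), number=20000)
    print(f"N={n:5d}  linear={t_lin:.4f}s  binary={t_bin:.4f}s")
```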