Why does quicksort have a big-$O$ complexity of $O(n \log n)$?
I would like some help understanding what exactly $n \log n$ means, and how it applies to quicksort.
Also in $(n \log n)$, what is the base for the $\log$?
The "standard" version of quicksort does not have a worst-case time complexity of $O(n \log n)$. In fact, it can require $\Theta(n^2)$ time. However, quicksort does have an *average-case* time complexity of $O(n \log n)$. You can get quicksort to run in $O(n \log n)$ worst-case time if you use a suitable pivot-selection strategy. In particular, you want a pivot-selection algorithm that requires at most linear time to find a pivot ensuring that each of the two recursive calls of quicksort is performed on at most a constant fraction of the input elements. See Median of medians.
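You can see the gap between the worst case and the average case empirically. Here is a minimal sketch (not a production implementation) of naive quicksort with a first-element pivot, instrumented with a hypothetical comparison counter: on already-sorted input every partition is maximally unbalanced, giving roughly $n^2/2$ comparisons, while random input gives roughly $2n \ln n$.

```python
import random

def quicksort(xs, counter):
    """Naive quicksort, first element as pivot.

    counter[0] accumulates the partitioning work (one unit per
    element compared against the pivot at each level).
    """
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)  # every remaining element is compared to the pivot
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

n = 500
c_sorted, c_random = [0], [0]
quicksort(list(range(n)), c_sorted)            # worst case: ~n^2/2 units
quicksort(random.sample(range(n), n), c_random)  # average case: ~2n ln n units
print(c_sorted[0], c_random[0])
```

With $n = 500$, the sorted input costs $499 + 498 + \dots + 1 = 124{,}750$ units, while the random input typically costs only a few thousand, which is why the average case is $O(n \log n)$ even though the worst case is quadratic.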
The base of the $\log$ doesn't really matter as long as it is a constant greater than $1$. This is because big-$O$ notation hides constant multiplicative factors, and if you consider two possible bases $a$ and $b$, you have $\log_a n = \frac{\log_b n}{\log_b a}$, where $\log_b a = \Theta(1)$. That said, $\log$ usually refers to the binary logarithm in computer science.
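The change-of-base identity above is easy to check numerically; this small sketch uses Python's standard `math` module:

```python
import math

n = 1024

# log_2 n computed directly...
direct = math.log2(n)

# ...and via base 10 using log_a n = log_b n / log_b a with a=2, b=10.
# The divisor log_10(2) is the constant factor that big-O hides.
via_base_10 = math.log10(n) / math.log10(2)

print(direct)  # → 10.0
assert math.isclose(direct, via_base_10)
```

So switching the base only rescales the function by a constant, which is why $O(\log_2 n)$, $O(\log_{10} n)$, and $O(\ln n)$ all denote the same complexity class.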