You are correct that binary search is $O(\log n)$; it shouldn't be multiplied by $n$.
Popular (comparison-based) sorting algorithms are $O(n \log n)$.
3SUM (i.e. find 3 elements in an array that sum to zero) using binary search is $O(n^2 \log n)$.
The pseudo-code:
For each element
  For each other element
    Do a binary search for the 3rd element that would make the sum zero.
Although the problem can be solved in $O(n^2)$ in a different way, this should still serve as a decent example.
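To make the pseudo-code concrete, here is one possible Python sketch of the $O(n^2 \log n)$ approach (the function name and structure are my own, just for illustration):

```python
from bisect import bisect_left

def three_sum_zero(arr):
    """Return a triple of values summing to zero, or None if there is none.

    O(n^2 log n): two nested loops, each iteration doing one binary search.
    """
    a = sorted(arr)  # O(n log n), dominated by the nested loops below
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            target = -(a[i] + a[j])
            # Binary search for the third element, strictly to the right of j
            # so we never reuse the same array position twice.
            k = bisect_left(a, target, j + 1)
            if k < n and a[k] == target:
                return (a[i], a[j], a[k])
    return None
```

Searching only to the right of `j` works because the array is sorted, so any valid triple appears (in sorted order) with its largest element last.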
Explanation of merge-sort complexity:
Merge-sort, for example, splits the array into 2 parts repeatedly (similar to binary search), but there are some differences:
- Binary search throws away the other half, where merge-sort processes both
- Binary search consists of a simple $O(1)$ check at each step, whereas merge-sort needs to do an $O(n)$ merge. This should already make the $O(\log n)$ vs $O(n \log n)$ difference make sense.
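To make the contrast concrete, here is a minimal merge-sort sketch in Python (one of many possible ways to write it):

```python
def merge_sort(a):
    """Sort a list: O(log n) levels of recursion, O(n) merge work per level."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    # Unlike binary search, which keeps only one half, we recurse on BOTH halves.
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Linear-time merge: each element is copied once at this level.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The two recursive calls plus the linear merge are exactly what the recurrence below captures.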
For a quick check, ask how much work, on average, is done for each element.
Note that a single merge takes linear time, i.e. $O(1)$ work per element.
You recurse down $O(\log n)$ times and, at each step there's a merge, so each element is involved in $O(\log n)$ merges.
And there are $O(n)$ elements.
Thus we have a time complexity of $O(n \log n)$.
There is also a more mathematical analysis: (source)
Let $T(n)$ be the time needed to sort $n$ elements. Since we can perform the splitting and merging in linear time, these two steps take $cn$ time for some constant $c$. So,
$T(n) = 2T(n/2) + cn$.
From here you work your way down to $T(1)$, and the remaining terms give you your $O(n \log n)$ running time. Or use the master theorem.
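Unrolling that recurrence (assuming $n$ is a power of 2 for simplicity):

$T(n) = 2T(n/2) + cn = 4T(n/4) + 2cn = \cdots = 2^k T(n/2^k) + kcn$.

After $k = \log_2 n$ levels of expansion this reaches $T(1)$, giving $T(n) = nT(1) + cn \log_2 n = O(n \log n)$.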
If both of these are unclear, it should be easy enough to find another resource explaining the complexity of merge-sort.