I am currently reading the book by Arora and Barak on computational complexity. In the third chapter (pp. 69-70), two classic time hierarchy theorems are introduced:
$$f(n)\log f(n) = o(g(n)) \;\Longrightarrow\; \mathrm{DTIME}(f(n)) \subsetneq \mathrm{DTIME}(g(n))$$
$$f(n+1) = o(g(n)) \;\Longrightarrow\; \mathrm{NTIME}(f(n)) \subsetneq \mathrm{NTIME}(g(n))$$
The proofs given for these theorems use (lazy) diagonalization with a universal TM that flips the answer. This presumes we can simulate a TM $M$ that runs in time $f(n)$ with a universal TM $U$ that runs in time $g(n)$. For the deterministic case this is possible because the overhead of the universal simulation is only logarithmic, so for large enough $n$ the simulation fits within the $g(n)$ time bound; the accounting I have in mind is spelled out below. For the non-deterministic case, however, the $\log$ factor is dropped. Could someone explain why the $\log$ factor isn't needed in that case?
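To be concrete about the deterministic accounting (the constant $C_M$ is my own notation for the simulation overhead, which depends only on $M$'s alphabet and number of tapes): simulating $f(n)$ steps of $M$ costs $U$ at most $C_M \, f(n)\log f(n)$ steps, and since $f(n)\log f(n) = o(g(n))$ there is an $n_0$ such that
$$C_M \, f(n)\log f(n) \;\le\; g(n) \quad \text{for all } n \ge n_0,$$
so the diagonalizing machine, which runs for at most $g(n)$ steps, has enough time to finish the simulation and output the opposite answer.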
The proofs in the book don't really help, as they only prove the theorems for the specific case $f(n) = n$ and $g(n) = n^{1.5}$.