Ok, I don't know about World of Warcraft or its implementation. Taking average network jitter and delays into account, you don't get a very accurate reading of the server's time anyway. (That is why NTP is hard to get right.)
If all you want to display is the server time, you just need to have the server send its time once and add half the current round-trip time to get a rough estimate that is accurate to around 100 ms. From there on you use the system clock. Sure, the system clock drifts, but we are talking a few seconds per day at worst.
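A minimal sketch of that idea, assuming a hypothetical `requestServerTimeMs()` request (stubbed out here so the example runs standalone): record the local time around the request, attribute half the round trip to the reply, and keep only the resulting offset.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>

using SteadyClock = std::chrono::steady_clock;

// Stand-in for the real network request: in a game this would be a
// "what time is it?" message to the server. Here it just returns the
// local wall-clock time so the example compiles and runs on its own.
std::int64_t requestServerTimeMs()
{
    return std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::system_clock::now().time_since_epoch()).count();
}

// Estimate the offset between the server clock and the local wall clock.
// Expect an error on the order of ~100 ms because of jitter.
std::int64_t estimateServerOffsetMs()
{
    const auto sent = SteadyClock::now();
    const std::int64_t serverMs = requestServerTimeMs();   // reply from server
    const auto received = SteadyClock::now();

    const auto rttMs = std::chrono::duration_cast<std::chrono::milliseconds>(
        received - sent).count();

    // Assume the reply spent roughly half the round trip in flight.
    const std::int64_t estimatedServerNowMs = serverMs + rttMs / 2;

    const std::int64_t localNowMs = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::system_clock::now().time_since_epoch()).count();

    return estimatedServerNowMs - localNowMs;
}

int main()
{
    // Display time = local system time + offset; no further round trips
    // are needed unless drift ever becomes noticeable.
    std::printf("estimated server offset: %lld ms\n",
                static_cast<long long>(estimateServerOffsetMs()));
}
```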
If you are thinking about actually using the server's clock for more accurate time readings or for logic, I would strongly discourage that. Use the local system clock as the time reference and work with time deltas based on it.
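In practice that just means every duration in gameplay code comes from a local monotonic clock; a sketch of the pattern (the `Cooldown` class is made up for illustration):

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

// A cooldown driven purely by the local monotonic clock.
// The server's time never enters the calculation; only deltas matter.
class Cooldown
{
public:
    explicit Cooldown(std::chrono::milliseconds duration)
        : m_duration(duration), m_start(Clock::now()) {}

    void restart() { m_start = Clock::now(); }

    bool ready() const { return Clock::now() - m_start >= m_duration; }

private:
    std::chrono::milliseconds m_duration;
    Clock::time_point m_start;
};

int main()
{
    Cooldown dodge(std::chrono::milliseconds(500));
    // The game loop would call dodge.ready() and dodge.restart();
    // server timestamps are only used for authoritative corrections.
    (void)dodge;
}
```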
I am not sure what you mean by a reliable clock. Everything above 1 s is handled by the system clock, which is a separate timekeeping IC driven by a quartz crystal, like any other clock. Anything below 1 s runs off CPU ticks, and those are very reliable; the CPU would not work otherwise. We are talking nanosecond resolution. You easily get reliable time readings down to 1 ms. Just so I don't get grilled for it: the system clock IC actually runs more accurately, and the CPU syncs its tick rate to it, which is a multiple of the system clock tick.
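If you want to see what your platform actually exposes, the tick period of the standard monotonic clock is part of its type; this little check just inspects and measures, it does not prove anything about the underlying hardware:

```cpp
#include <chrono>
#include <cstdio>

int main()
{
    using Clock = std::chrono::steady_clock;

    // Nominal tick period of the clock, e.g. 1/1000000000 s on most platforms.
    std::printf("steady_clock period: %lld/%lld s\n",
                static_cast<long long>(Clock::period::num),
                static_cast<long long>(Clock::period::den));

    // Time a short busy loop; intervals well below 1 ms are still measurable.
    const auto t0 = Clock::now();
    volatile unsigned long long sum = 0;
    for (int i = 0; i < 1000000; ++i) sum = sum + static_cast<unsigned long long>(i);
    const auto t1 = Clock::now();

    const auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0);
    std::printf("busy loop: %lld ns\n", static_cast<long long>(ns.count()));
}
```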
So why does time slow down at low frame rates? Do you mean simulation time as opposed to real (wall) time? In that case you have other problems to handle, like what happens when different clients experience different time dilation. The way I see it, the server is the only authoritative instance and the clients only do prediction. The prediction is off by a few hundred milliseconds anyway due to network jitter, so timekeeping is the least of your problems. It also helps to make client prediction and effect simulation frame rate independent. Sure, it reduces accuracy, but on the client any simulation should only be there for a smoother experience.
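Frame-rate independence usually comes down to scaling each update by the measured frame delta (or running a fixed timestep); here is a minimal delta-time sketch, not tied to any particular engine:

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

struct Projectile
{
    float position = 0.0f;   // metres
    float velocity = 30.0f;  // metres per second
};

// Advance the simulation by the real time elapsed since the last frame,
// so the outcome is the same at 30 fps and at 300 fps.
void update(Projectile& p, float dtSeconds)
{
    p.position += p.velocity * dtSeconds;
}

int main()
{
    Projectile p;
    auto previous = Clock::now();

    for (int frame = 0; frame < 100; ++frame)
    {
        const auto now = Clock::now();
        const float dt = std::chrono::duration<float>(now - previous).count();
        previous = now;

        update(p, dt);
        // render(p); // how often we render no longer affects the simulation
    }
}
```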