You get to pick your model of computation. If you intend to run your algorithm on a computer, then it is appropriate to pick the model of computation that best fits the way your algorithm will be implemented. For example, in floating point arithmetic, computing $(x/3)^n$ takes $O(1)$ time, whereas in rational arithmetic you would use the repeated squaring algorithm; but since the numbers involved get large, the running time isn't quite $O(\log n)$ (in particular, the end result is $\Omega(n)$ bits long). In practice, if we are computing $(x/3)^n$ exactly, then we're probably using modular arithmetic (i.e., computing everything modulo some number $m$), in which case the running time is $O(\log n)$ multiplications of $\log m$-bit numbers.
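The repeated squaring idea can be sketched as follows (a minimal Python sketch; the name `modpow` is my own, and Python's built-in `pow(base, exp, m)` already does the same thing):

```python
def modpow(base, exp, m):
    """Compute base**exp mod m using O(log exp) multiplications."""
    result = 1
    base %= m
    while exp > 0:
        if exp & 1:          # current lowest bit of the exponent is set
            result = result * base % m
        base = base * base % m  # square the base for the next bit
        exp >>= 1
    return result
```

Each loop iteration handles one bit of the exponent, so the number of multiplications is logarithmic in $n$, and reducing modulo $m$ at every step keeps each operand at most $\log m$ bits long.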
In theoretical work, the most common model is some form of the RAM machine, or possibly the real RAM, which is common in computational geometry. In both cases, there isn't really a single well-defined model that everybody agrees on, but in most cases the differences don't matter. Fixing a model when analyzing an algorithm is important so that we can compare the asymptotic running times of different algorithms.