Right off the bat, I want to say that I know the "How many bits?" question has already been asked many times, in many ways.
I have done my best to comb through the answers given in the previous posts, but there seem to be two conflicting answers. For example, in this post, there is a disagreement on the following:
Given an unsigned integer $n$, what is the minimum number of bits $N_b$ required to represent it?
The first (and marked correct) answer is: $N_b=\lfloor{\log_2(n)+1}\rfloor$
However, users in the comments and answers below claim that the above is wrong, and that the actual answer is: $N_b=\lceil\,\log_2(n+1)\rceil$
When I try to figure it out myself, I come up with the second answer:
With $N_b$ bits I can represent the unsigned values $0$ through $2^{N_b}-1$, so I need $N_b$ such that $n\leq 2^{N_b}-1$, which implies $N_b\geq\log_2(n+1)$. Since $N_b$ must be an integer and $\log_2(n+1)$ is the smallest value it can take, the minimum is obtained by rounding up to the next integer. Therefore $N_b=\lceil\,\log_2(n+1)\rceil$.
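For what it's worth, here is a minimal Python sketch of how I would compare the two formulas numerically, using Python's built-in `int.bit_length()` as an exact reference; the names `bits_floor` and `bits_ceil` are just my own labels for the two candidate formulas:

```python
import math

def bits_floor(n):
    # First formula: floor(log2(n) + 1); note log2(0) is undefined
    return math.floor(math.log2(n) + 1)

def bits_ceil(n):
    # Second formula: ceil(log2(n + 1))
    return math.ceil(math.log2(n + 1))

# int.bit_length() counts the bits of n's binary representation exactly,
# so it serves as a reference that is free of floating-point error.
# Start at n = 1 because the first formula is undefined at n = 0.
for n in range(1, 1 << 12):
    if not (bits_floor(n) == bits_ceil(n) == n.bit_length()):
        print(f"mismatch at n = {n}: "
              f"{bits_floor(n)}, {bits_ceil(n)}, {n.bit_length()}")
```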
Does anyone know where the first answer comes from, and why there is a disagreement between the two?