I'm trying to understand the Double-precision floating-point format:
As I understand it so far, a floating-point number has the form $$ (-1)^s2^{c-1023}(1+f) $$ where $s=0,1$ is the sign bit, $c$ is the 11-bit (biased) exponent, and $f$ is the 52-bit binary fraction.
Here is my question:
What is the largest floating point number?
I found an answer in Burden and Faires's *Numerical Analysis*, which I don't understand:
... the largest has $s=0$, $c=2046$, and $f=1-2^{-52}$ and is equivalent to $$ 2^{1023}(1+1-2^{-52}) $$
I don't see why $c=2046$. An exponent of 11 binary digits gives a range of $0$ to $2^{11}-1=2047$. Why is $c$ not $2047$?
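For what it's worth, the value the book gives can be checked numerically. A minimal sketch (assuming Python floats are IEEE 754 doubles, which is true on virtually all platforms) compares $2^{1023}(1+1-2^{-52})$ against the largest finite double the system reports:

```python
import struct
import sys

# Value from the book: s=0, c=2046, f = 1 - 2**-52, i.e.
# 2**(2046-1023) * (1 + 1 - 2**-52) = 2**1023 * (2 - 2**-52)
largest = 2.0**1023 * (2.0 - 2.0**-52)

# Python floats are IEEE 754 doubles, so this matches the system maximum
print(largest == sys.float_info.max)  # True

# Bit pattern: sign bit 0, exponent field 2046 (0b11111111110),
# fraction field all ones
bits = struct.unpack(">Q", struct.pack(">d", largest))[0]
print(hex(bits))  # 0x7fefffffffffffff
```

Note that trying the same with an exponent field of $2047$ does not produce a finite number, which is what the question is really about.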