The decimal point can "float" to accommodate larger numbers while staying in 32 bits, which is why float is considered "inaccurate".
Is this an accurate statement? I want to know if I understand floating point numbers.
As stated, this isn't an accurate statement.
A particular floating point representation can exactly represent some values and cannot exactly represent other values. The same is true for all data types. Numeric data types don't vary in their "accuracy" but rather in the values that they can exactly represent. For instance, a single-byte unsigned integer type can represent 255, but cannot represent 257 or 0.5.
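A minimal sketch in C (assuming the common platform where a byte is 8 bits) makes this concrete:

    #include <stdio.h>

    int main(void) {
        unsigned char byte = 255; /* the largest value one byte can hold */
        printf("%d\n", byte);     /* prints 255 */

        byte = 257;               /* does not fit: wraps to 257 mod 256 */
        printf("%d\n", byte);     /* prints 1 */

        byte = 0.5;               /* the fractional part is discarded */
        printf("%d\n", byte);     /* prints 0 */
        return 0;
    }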
A floating point representation is no different: it can represent some numbers and cannot represent others. What confuses people is that, unlike with integer types, the values a floating point representation cannot represent can fall between values that it can. So, for example, a floating point representation might be able to represent 0.25 and 0.5, but might not be able to represent 1/3.
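To see the interleaving, here is a small C sketch (assuming the common 32-bit IEEE 754 float); printing extra digits shows which values are stored exactly:

    #include <stdio.h>

    int main(void) {
        float a = 0.25f;       /* 1/4 is a power of two: stored exactly */
        float b = 0.5f;        /* so is 1/2 */
        float c = 1.0f / 3.0f; /* 1/3 has no finite binary expansion */

        printf("%.10f\n", a);  /* 0.2500000000 */
        printf("%.10f\n", b);  /* 0.5000000000 */
        printf("%.10f\n", c);  /* 0.3333333433, the nearest representable float */
        return 0;
    }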
What is true is that with integer types (or fixed decimal types) it is very easy to determine what values can be represented. You know offhand that an int cannot represent fractional values, nor values greater than MAX_INT. With a float, it takes detailed knowledge of the representation to determine whether a value can be exactly represented.
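One way to probe this programmatically, sketched in C: check whether a value survives a round trip through float. The fits_in_float helper below is purely illustrative; note also that the literal 0.1 is itself already the nearest double, not 0.1 exactly.

    #include <stdio.h>

    /* Illustrative helper: does this double survive a round trip through float? */
    static int fits_in_float(double x) {
        return (double)(float)x == x;
    }

    int main(void) {
        printf("%d\n", fits_in_float(0.5));        /* 1: a power of two fits */
        printf("%d\n", fits_in_float(0.1));        /* 0: no finite binary form */
        printf("%d\n", fits_in_float(16777216.0)); /* 1: 2^24 fits in float's 24-bit significand */
        printf("%d\n", fits_in_float(16777217.0)); /* 0: 2^24 + 1 needs 25 bits */
        return 0;
    }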
It is very difficult to programmatically test whether a particular mathematical statement will produce a floating point value that cannot be exactly represented. On the other hand, it is trivial to programmatically test whether it will produce an integer value that cannot be exactly represented. So people do these sorts of tests all the time with integers (testing for overflow), but with floating point they just live with the fact that there may be rounding, and keep track of how close the results are to the actual value (the precision).
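For example, the integer overflow pre-check is a few lines of C, while on the floating point side you typically just observe the drift. A sketch:

    #include <limits.h>
    #include <stdio.h>

    /* Trivial exact pre-check: would a + b overflow a signed int? */
    static int add_would_overflow(int a, int b) {
        if (b > 0 && a > INT_MAX - b) return 1;
        if (b < 0 && a < INT_MIN - b) return 1;
        return 0;
    }

    int main(void) {
        printf("%d\n", add_would_overflow(INT_MAX, 1)); /* 1: would overflow */
        printf("%d\n", add_would_overflow(1, 2));       /* 0: safe */

        /* No comparable exact test exists for rounding; you track the error instead. */
        float sum = 0.0f;
        for (int i = 0; i < 10; i++)
            sum += 0.1f;          /* each addition may round */
        printf("%.9f\n", sum);    /* prints 1.000000119, not 1.0 */
        return 0;
    }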
Or another way to put it: saying that a given floating point value is inaccurate does not really make sense. Instead, it is better to say that a given set of floating point operations may produce inaccurate results.
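The classic demonstration of this framing: each operand below is stored as the closest double to its decimal literal, yet the result of the addition differs from the closest double to 0.3.

    #include <stdio.h>

    int main(void) {
        double sum = 0.1 + 0.2;     /* the literals are rounded on input, and the sum is rounded again */
        printf("%.17f\n", sum);     /* prints 0.30000000000000004 */
        printf("%d\n", sum == 0.3); /* prints 0: the operations, not any single value, produced the error */
        return 0;
    }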