
The decimal point can "float" to accommodate larger numbers while staying within 32 bits, which is why float is considered "inaccurate".

Is this an accurate statement? I want to know if I understand floating point numbers.

Rhys
  • What do you mean by "inaccurate", and "inaccurate" relative to what? Have you considered the opposite case of accommodating really tiny numbers? – Doval Feb 26 '15 at 21:03
  • Inaccurate when dealing with a real number – Rhys Feb 26 '15 at 21:06
  • Some real numbers are rather hard to store precisely in numerical form, like pi or e, which are transcendental. – JB King Feb 26 '15 at 22:00
  • More specifically, most real numbers are impossible to store in a fixed amount of space. – Doval Feb 26 '15 at 22:05
  • @Doval With limited memory, the same is true of natural numbers and rational numbers and all other infinite sets. However, these two sets are countable: all naturals and rationals can be represented if you allow unbounded (but finite) memory. The reals, however, are uncountable, so even a bona fide Turing machine cannot represent almost all (i.e., all but a countable subset of) real numbers, regardless of the encoding chosen. Which real numbers can be represented is highly dependent on said encoding, though: pi and e are rather easy to represent as computable numbers. – Feb 26 '15 at 22:20

1 Answer


As stated, this isn't an accurate statement.

A particular floating point representation can exactly represent some values and cannot accurately represent other values. The same is true for all data types. Numeric data types don't vary in their "accuracy" but rather in the values that they can accurately represent. For instance, a single-byte unsigned integer type can represent 255, but cannot represent 257 or 0.5.
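To make that concrete, here is a minimal C sketch (assuming uint8_t from <stdint.h> is an 8-bit unsigned type, as it is on mainstream platforms) showing which values such a type can and cannot hold:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t fits = 255;        /* representable: within 0..255           */
        uint8_t wraps = 257;       /* not representable: stored modulo 256   */
        uint8_t truncated = 0.5;   /* not representable: fraction dropped    */

        printf("%u %u %u\n", (unsigned)fits, (unsigned)wraps, (unsigned)truncated);
        /* prints: 255 1 0 */
        return 0;
    }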

A floating point representation is no different. It can represent some numbers, and cannot represent others. What confuses people is that unlike integer types, values that a floating point representation cannot represent can be between values that it can represent. So, for example, a floating point representation might be able to represent 0.25 and 0.5, but might not be able to represent 1/3rd.
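A small C sketch of that idea, assuming IEEE 754 double precision (which is what mainstream compilers use for double):

    #include <stdio.h>

    int main(void) {
        double exact   = 0.25 + 0.5;    /* both terms and the sum are representable */
        double inexact = 1.0 / 3.0;     /* 1/3 has no finite binary expansion       */

        printf("%.17g\n", exact);             /* 0.75                     */
        printf("%.17g\n", inexact);           /* 0.33333333333333331      */
        printf("%d\n", 0.1 + 0.2 == 0.3);     /* 0: none of these decimals
                                                 are exactly representable */
        return 0;
    }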

What is true is that with integer types (or fixed-point decimal types) it is very easy to determine which values can be represented. You know offhand that an int cannot represent fractional values, nor values greater than MAX_INT. With a float, determining whether a value can be exactly represented requires detailed knowledge of the representation.
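For example, assuming IEEE 754 double precision, a double can hold some integers exactly but not others, and nothing in the source makes that obvious without knowing about the 53-bit significand:

    #include <stdio.h>

    int main(void) {
        double ok  = 9007199254740992.0;   /* 2^53: fits in the 53-bit significand */
        double off = 9007199254740993.0;   /* 2^53 + 1: needs 54 bits, so it is
                                              rounded to the nearest even, 2^53    */

        printf("%.0f\n", ok);              /* 9007199254740992 */
        printf("%.0f\n", off);             /* 9007199254740992 as well */
        return 0;
    }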

It is very difficult to programmatically test whether a particular mathematical expression will produce a floating point value that cannot be exactly represented. On the other hand, it is trivial to programmatically test whether it will produce an integer value that cannot be exactly represented. So people do these sorts of tests all the time with integers (testing for overflow), but just live with the fact that there may be rounding in floating point expressions, and keep track of how close the results are to the actual value (the precision).
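As a rough C sketch of that contrast (safe_add here is only an illustration, not a standard function, and the tolerance is an arbitrary choice for the example):

    #include <limits.h>
    #include <math.h>
    #include <stdio.h>

    /* Integer case: overflow can be tested for exactly, before it happens. */
    int safe_add(int a, int b, int *out) {
        if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
            return 0;                    /* result would not be representable */
        *out = a + b;
        return 1;
    }

    int main(void) {
        int sum;
        printf("%d\n", safe_add(INT_MAX, 1, &sum));   /* 0: overflow caught exactly */

        /* Floating-point case: rather than testing for exactness, accept that
           rounding may happen and compare against a tolerance.               */
        double x = 0.1 + 0.2;
        printf("%d\n", fabs(x - 0.3) < 1e-9);         /* 1: close enough to 0.3 */
        return 0;
    }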

Or another way to put it: saying that a given floating point value is inaccurate does not really make sense. Instead, it is better to say that a given set of floating point operations may produce inaccurate results.

Gort the Robot
  • "unlike integer types, values that a floating point representation cannot represent can be between values that it can represent." - Well, technically an integer type cannot represent 0.5 even though that's between 0 and 1. I think that you mean that the value range of an integer type is a contiguous subset of mathematical integers, whereas the value range of a floating-point type is not a contiguous subset of the reals. – MSalters Feb 27 '15 at 08:40
  • On PowerPC and the latest Intel processors (everything with FMA) it is quite easy to determine whether multiplication or division gave an exact result. For addition and subtraction, it's easy for any IEEE 754 implementation. – gnasher729 Feb 15 '16 at 21:20
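Following up on the last comment, here is a sketch of what such exactness tests can look like in C, assuming IEEE 754 doubles, a correctly rounded fma from <math.h>, and ignoring overflow/underflow corner cases:

    #include <math.h>
    #include <stdio.h>

    /* Exactness test for multiplication using FMA: fma(a, b, -r) recovers the
       rounding error of r = a * b, so a zero residual means the product was exact. */
    int product_is_exact(double a, double b) {
        double r = a * b;
        return fma(a, b, -r) == 0.0;
    }

    /* Exactness test for addition (Fast2Sum), assuming |a| >= |b|. */
    int sum_is_exact(double a, double b) {
        double s = a + b;
        return b - (s - a) == 0.0;
    }

    int main(void) {
        printf("%d\n", product_is_exact(0.25, 0.5));   /* 1: 0.125 is exact  */
        printf("%d\n", product_is_exact(0.1, 0.1));    /* 0: product rounded */
        printf("%d\n", sum_is_exact(0.5, 0.25));       /* 1: 0.75 is exact   */
        printf("%d\n", sum_is_exact(1.0, 1e-300));     /* 0: low bits lost   */
        return 0;
    }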