10

I've learned PHP, Java, and C. Now I'm curious why there are so many numerical data types, such as bit, int, float, double, and long. Why not have just one type for all numbers?

Is there any benefit to this? Maybe if we use an integer type to hold small numbers, we can save memory?

Kilian Foth
GusDeCooL
  • 6
    In addition to HorusKol's answer: 'float' and 'integer' types are inherently different. Floats can hold very large numbers, but as the magnitude goes up, the precision goes down; this imprecision comes from the way floats are stored. By contrast, the range of values you can store in an integer is quite limited, but the value is always exact, so you can compare values much more easily. Division also behaves differently: integer division automatically discards the fractional part, while float division does not. Each of these behaviours is useful in different situations (see the short sketch after these comments). – kampu May 25 '13 at 05:45
  • Javascript only has one number type on the surface. – Esailija May 25 '13 at 11:18
  • @kampu: Actually, in many languages, integers can store any number as long as the (virtual) memory is big enough to represent it. – Jörg W Mittag May 26 '13 at 00:08
  • 1
    @JörgWMittag: However, the questioner is clearly talking about statically typed languages, not dynamic languages like Python, for example. CPython itself implements the 'unlimited range' integer as an array of 32-bit ints, with the final bit in each int used to indicate whether there are more bits to go. Also, integers can only store whole numbers. That means a float with infinite storage could store values to the precision of (infinity aleph one), while integers can store values only to the precision of (infinity aleph zero). – kampu May 26 '13 at 01:12
  • @kampu: Since all numbers are represented by series of bits, even with infinite storage there will always be a one-to-one mapping between floating point numbers and integers. So I don't think aleph one comes into the question. – COME FROM May 27 '13 at 06:39
  • @COMEFROM: It's true that an N-bit float doesn't magically store any more information than an N-bit integer. However, in practice people mostly do not bother with mapping an N-bit piece of data to anything -- they just use the conventional 'float', 'integer', or 'decimal' types, except for very small N (which are usually enumerations). It's in deciding between these that the aleph-X-infinity measure is relevant. – kampu May 27 '13 at 06:58
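
As a quick illustration of both points from the comments, here is a minimal C sketch (C being one of the languages the question mentions). It assumes a 32-bit IEEE-754 float, which is what mainstream platforms provide:

#include <stdio.h>

/* A 32-bit float cannot represent every integer above 2^24, and integer
   division discards the fractional part while float division keeps it. */
int main(void)
{
    float big = 16777216.0f;        /* 2^24 */
    printf("%.1f\n", big + 1.0f);   /* prints 16777216.0: the +1 is lost */

    printf("%d\n", 7 / 2);          /* integer division: prints 3 */
    printf("%.1f\n", 7.0 / 2.0);    /* float division:   prints 3.5 */
    return 0;
}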

7 Answers

18

There are two reasons why you should be concerned with the different numerical data types.

1. Saving memory

for (long k = 0; k <= 10; k++)
{
    // stuff
}

Why use a long when the counter never goes above 10? An int, or even a byte, would do just as well, and you would indeed save a few bytes of memory by choosing the smaller type.
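
To see the difference in storage, here is a minimal C sketch that prints the size of each type with sizeof. The exact sizes are implementation-defined, so the values in the comments are only the typical ones on a 64-bit platform:

#include <stdio.h>

/* Print how many bytes each numeric type occupies on this platform.
   Sizes are implementation-defined; the comments show typical LP64 values. */
int main(void)
{
    printf("char   : %zu byte(s)\n", sizeof(char));    /* 1 */
    printf("short  : %zu byte(s)\n", sizeof(short));   /* 2 */
    printf("int    : %zu byte(s)\n", sizeof(int));     /* 4 */
    printf("long   : %zu byte(s)\n", sizeof(long));    /* 8 (4 on some platforms) */
    printf("float  : %zu byte(s)\n", sizeof(float));   /* 4 */
    printf("double : %zu byte(s)\n", sizeof(double));  /* 8 */
    return 0;
}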

2. Floating point numbers and integer numbers are stored differently in the computer

Suppose we have the number 22 stored in a 32-bit integer. The computer stores this number in memory in binary as:

0000 0000 0000 0000 0000 0000 0001 0110

If you're not familiar with the binary number system, this can be expanded as a sum of powers of two: 2^0*0 + 2^1*1 + 2^2*1 + 2^3*0 + 2^4*1 + 2^5*0 + ... + 2^30*0. The most significant (leftmost) bit may or may not be used to indicate whether the number is negative (depending on whether the data type is signed or unsigned).

Essentially, it's just a summation of 2^(bit position) * (bit value), as the sketch below demonstrates.
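
The following C snippet rebuilds 22 from its individual bits; it is only an illustration of that formula, not of how the hardware actually performs the addition:

#include <stdio.h>

/* Rebuild 22 by summing 2^(bit position) for every bit that is set. */
int main(void)
{
    unsigned int n = 22;            /* binary ...0001 0110 */
    unsigned int sum = 0;

    for (int bit = 0; bit < 32; bit++) {
        if (n & (1u << bit)) {
            printf("bit %d contributes 2^%d = %u\n", bit, bit, 1u << bit);
            sum += 1u << bit;
        }
    }
    printf("total = %u\n", sum);    /* prints 22 */
    return 0;
}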

This changes when you are dealing with values involving a decimal point. Suppose you have the number 3.75 in decimal; in binary this is 11.11. We can write that as the sum 2^1*1 + 2^0*1 + 2^-1*1 + 2^-2*1 or, normalized in binary scientific notation, as 1.111*2^1.

The computer can't store that as written, however: it has no explicit way of expressing the binary point (the binary number system's version of the decimal point). The computer can only store 1's and 0's. This is where the floating point data type comes in.

Assuming sizeof(float) is 4 bytes, you have a total of 32 bits. The first bit is the "sign bit" (there are no unsigned floats or doubles). The next 8 bits are used for the "exponent", and the final 23 bits are used for the "significand" (sometimes referred to as the mantissa). Using our 3.75 example, our exponent is 1 (giving the factor 2^1) and our significand is 1.111.

If the first bit is 1, the number is negative; if it is 0, positive. The exponent is modified by something called "the bias", so we can't simply store "0000 0001" as the exponent. The bias for a single precision floating point number is 127, and the bias for a double precision number (this is where the double data type gets its name) is 1023. The final 23 bits are reserved for the significand; since the normalized significand always starts with 1, only the bits to the RIGHT of the binary point are actually stored.

Our stored exponent would be the bias (127) plus the exponent (1), which is 128, or in binary:

1000 0000

Our significand would be:

111 0000 0000 0000 0000 0000

Therefore, 3.75 is represented as:

0100 0000 0111 0000 0000 0000 0000 0000
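
You can check this layout yourself. The following C sketch copies the raw bits of 3.75f into an integer and splits out the three fields; it assumes float is a 32-bit IEEE-754 type, which the C standard does not require but which holds on mainstream hardware:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Extract the sign, biased exponent and fraction fields of 3.75f. */
int main(void)
{
    float f = 3.75f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* reinterpret the bits, don't convert */

    uint32_t sign     = bits >> 31;            /* 1 bit               */
    uint32_t exponent = (bits >> 23) & 0xFF;   /* 8 bits, bias of 127 */
    uint32_t fraction = bits & 0x7FFFFF;       /* 23 bits             */

    printf("raw bits : 0x%08X\n", (unsigned)bits);        /* 0x40700000 */
    printf("sign     : %u\n", (unsigned)sign);            /* 0          */
    printf("exponent : %u (unbiased %d)\n",
           (unsigned)exponent, (int)exponent - 127);      /* 128 (unbiased 1) */
    printf("fraction : 0x%06X\n", (unsigned)fraction);    /* 0x700000 = .111 binary */
    return 0;
}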

Now, let's look at the number 8 represented as a floating point number (first line) and as an integer (second line):

0100 0001 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 1000

How in the world is the computer going to add 8.0 and 8, or even multiply them? The computer (more specifically, an x86 CPU) has different portions of the CPU that handle floating point arithmetic and integer arithmetic, and one value has to be converted to the other's format before the two can be combined.
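
To make that concrete, here is a small C sketch showing that the integer 8's bit pattern, when merely reinterpreted as a float, is nowhere near 8.0, which is why a real conversion step is needed (again assuming a 32-bit IEEE-754 float):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* The same 32 bits mean very different things to the two formats. */
int main(void)
{
    int32_t i = 8;
    float   f;

    memcpy(&f, &i, sizeof f);           /* reuse the bits, no conversion */
    printf("bits of int 8 read as a float: %g\n", f);          /* ~1.1e-44, not 8.0 */

    float converted = (float)i;         /* an actual int-to-float conversion */
    printf("int 8 converted to a float  : %g\n", converted);   /* 8 */
    return 0;
}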

cpmjr123