Misunderstanding of floating-point arithmetic and its shortcomings is a major cause of surprise and confusion in programming (consider the number of questions on Stack Overflow pertaining to "numbers not adding correctly"). Considering that many programmers have yet to understand its implications, it has the potential to introduce many subtle bugs (especially into financial software). What can programming languages do to avoid its pitfalls for those who are unfamiliar with the concepts, while still offering its speed, when accuracy is not critical, to those who do understand the concepts?
-
The only thing a programming language can do to avoid the pitfalls of floating-point processing is to ban it. Note that this includes base-10 floating-point as well, which is just as inaccurate in general, except that financial applications are pre-adapted to it. – David Thornley Mar 28 '11 at 16:46
-
This is what "Numerical Analysis" is for. Learn how to minimize precision loss - aka floating point pitfalls. – Jul 12 '12 at 23:00
-
A good example of a floating point issue: http://stackoverflow.com/questions/10303762/0-0-0-0-0 – Austin Henley Aug 01 '12 at 13:14
16 Answers
You say "especially for financial software", which brings up one of my pet peeves: money is not a float, it's an int.
Sure, it looks like a float. It has a decimal point in there. But that's just because you're used to units that confuse the issue. Money always comes in integer quantities. In America, it's cents. (In certain contexts I think it can be mills, but ignore that for now.)
So when you say $1.23, that's really 123 cents. Always, always, always do your math in those terms, and you will be fine. For more information, see:
- Martin Fowler's Quantity and Money patterns
- His books Patterns of Enterprise Application Architecture and Analysis Patterns
- Wikipedia on Banker's Rounding
Answering the question directly, programming languages should just include a Money type as a reasonable primitive.
update
Ok, I should have only said "always" twice, rather than three times. Money is indeed always an int; those who think otherwise are welcome to try sending me 0.3 cents and showing me the result on your bank statement. But as commenters point out, there are rare exceptions when you need to do floating point math on money-like numbers. E.g., certain kinds of prices or interest calculations. Even then, those should be treated like exceptions. Money comes in and goes out as integer quantities, so the closer your system hews to that, the saner it will be.
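To illustrate (a minimal sketch of the integer-cents idea; this Money type is hypothetical, not the pattern from Fowler's books):

using System;

// Sketch: money as an integer count of cents. Addition and subtraction
// stay exact; nothing can drift by a penny through binary rounding.
readonly struct Money
{
    public readonly long Cents;

    public Money(long cents) { Cents = cents; }

    public static Money operator +(Money a, Money b) => new Money(a.Cents + b.Cents);
    public static Money operator -(Money a, Money b) => new Money(a.Cents - b.Cents);

    public override string ToString() =>
        $"{(Cents < 0 ? "-" : "")}${Math.Abs(Cents) / 100}.{Math.Abs(Cents) % 100:D2}";
}

So new Money(123) is $1.23, and summing a million of them is still exact.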

-
It's not an int... it's a Decimal... making it an "int" and writing all the parsing and formatting code yourself is unnecessary... Decimal does that for you – JoelFan Mar 28 '11 at 21:40
-
@JoelFan: you're mistaking a concept for a platform specific implementation. – whatsisname Mar 28 '11 at 21:54
-
It's not quite that simple. Interest calculations, among others, do produce fractional cents, and have to be rounded at some point according to a specified method. – kevin cline Mar 28 '11 at 23:16
-
@kevin - That is true. I would imagine over many transactions truncating the fractional portion would lead to a considerable loss. – ChaosPandion Mar 28 '11 at 23:22
-
Fictional -1, since I lack the rep for a downvote :) ...This might be correct for whatever's in your wallet but there are plenty of accounting situations where you could well be dealing with tenths of a cent, or smaller fractions. Decimal is the only sane system for dealing with this, and your comment "ignore that for now" is the harbinger of doom for programmers everywhere :P – detly Mar 29 '11 at 01:18
-
@kevin cline: There are fractional cents in calculations, but there are conventions on how to handle them. The goal for financial calculations is not mathematical correctness, but getting the exact same results that a banker with a calculator would. – David Thornley Mar 29 '11 at 13:42
-
Technically even fractional cents don't matter in an integer model if you know the required degree of precision. Just sayin' but I'm not sayin'. – Rig Jul 12 '12 at 16:46
-
Everything will be perfect by replacing the word "integer" with "rational". – Emilio Garavaglia Jul 12 '12 at 16:48
-
Accounting software that can be defeated by, say, buying 1000 nails for a dollar and transferring them between cost centres one by one... is not very good accounting software. – detly Jul 13 '12 at 05:12
-
The fractional cents are also valued when you go up in size. If you're talking bond sizes of millions (not that uncommon at banks) the fractional interest matters quite a bit. – MathAttack Aug 02 '12 at 03:05
-
It is not true that money is not a float, it depends on the application. For accounting, it is very wrong to use floats for money and exact arithmetic should be preferred (integers are a poor choice as well, because of FX). For risk-management purposes, using floats to represent money is the only way to go. – Michaël Le Barbier Jul 13 '14 at 12:18
-
With financial software, you have to calculate exchange rates, inventory ratios, and many other decimal based equations. For the integer argument, say 2 pencils cost 1 cent, and 1 pencil was used. What amount in currency was used if currency was an integer? If it was stored as an integer value the result would lose accuracy. For all intents and purposes the Decimal type in many languages and database systems is viable; otherwise store the result as a string and utilize precision based math operators. – Will B. Aug 27 '14 at 20:41
-
Although currency is a discrete quantity when represented physically, it becomes continuous when represented as a value (paper, software, etc). This results in some rounding error when converted back into a physical representation. Ideally, the unit of currency would always be that of the smallest currency unit and only be divisible up to a few points of precision - fixed-point decimals, arbitrary word-length - but beyond simple record-keeping, other types may become necessary – Aaron3468 Aug 11 '16 at 09:33
-
I wish I could upvote this twice. Misuse of floating point is a massive pet peeve of mine. I once worked with a commercial accounting system that used them and (from time to time) went out of kilter by a penny which needed to get posted to an error account. Totally unnecessary because some genius decided to store monetary amounts in the database as double! I'd also add that this applies to applications 'handling' money. If you're modeling the economy then implement GDP as a double, but if you've got an actual, realizable amount of money use an integral (or built-in exact decimal) representation. – Persixty Apr 19 '17 at 09:31
Providing support for a Decimal type helps in many cases. Many languages have a decimal type, but it is underused.
Understanding the approximation that occurs when working with representations of real numbers is important. With a decimal type, 9 * (1/9) != 1 is a true statement; with binary floating point, the analogous surprise shows up in expressions like 0.1 + 0.2 != 0.3. When the operands are compile-time constants, an optimizer may fold the calculation so that it appears correct.
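A short C# check of both claims (using variables so the compiler cannot constant-fold the expressions):

using System;

class Program
{
    static void Main()
    {
        decimal oneNinth = 1m / 9m;                  // 0.1111111111111111111111111111
        Console.WriteLine(9m * oneNinth == 1m);      // False: 0.9999999999999999999999999999

        double a = 0.1, b = 0.2;
        Console.WriteLine(a + b == 0.3);             // False: a + b is 0.30000000000000004
    }
}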
Providing an approximates operator would help. However, such comparisons are problematic. Note that .9999 trillion dollars is approximately equal to 1 trillion dollars. Could you please deposit the difference in my bank account?

-
0.9999... trillion dollars is precisely equal to 1 trillion dollars actually. – JUST MY correct OPINION Mar 29 '11 at 07:00
-
@JUST: Yes, but I haven't encountered any computers with registers that will hold 0.99999.... They all truncate at some point, resulting in an inequality. 0.9999 is equal enough for engineering. For financial purposes it isn't. – BillThor Mar 29 '11 at 14:08
-
But what kind of system uses trillions of dollars as the base unit instead of ones of dollars? – Brad Aug 01 '12 at 18:23
-
@Brad Try calculating (1 Trillion / 3) * 3 on your calculator. What value do you get? – BillThor Aug 04 '12 at 13:07
We were told what to do in the first-year (sophomore) lecture in computer science when I went to university (this course was a prerequisite for most science courses as well).
I recall the lecturer saying "Floating point numbers are approximations. Use integer types for money. Use FORTRAN or another language with BCD numbers for accurate computation." (and then he pointed out the approximation, using that classic example of 0.2 being impossible to represent exactly in binary floating point). This also turned up that week in the laboratory exercises.
Same lecture: "If you must get more accuracy from floating point, sort your terms. Add small numbers together, not to big numbers." That stuck in my mind.
A few years ago I had some spherical geometry code that needed to be very accurate, and still fast. The 80-bit double on PCs was not cutting it, so I added some types to the program that sorted terms before performing commutative operations. Problem solved.
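For example (a hypothetical C# helper in the spirit of that advice, not the author's actual code):

using System;
using System.Linq;

static class StableSum
{
    // Add terms in order of increasing magnitude, so small values combine
    // with each other before they meet a large running total.
    public static double SortedSum(double[] values) =>
        values.OrderBy(v => Math.Abs(v)).Sum();
}

With xs = { 1e16, 1.0, 1.0, 1.0, 1.0 }, a naive left-to-right xs.Sum() returns 1e16 (each lone 1.0 is rounded away), while StableSum.SortedSum(xs) returns 1e16 + 4, because the ones pair up before the big term can swallow them.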
Before you complain about the quality of the guitar, learn to play.
I had a co-worker four years ago who'd worked for JPL. He expressed disbelief that we used FORTRAN for some things. (We needed super accurate numerical simulations calculated offline.) "We replaced all that FORTRAN with C++" he said proudly. I stopped wondering why they missed a planet.

-
+1 the right tool for the right job. Although I don't actually use FORTRAN. Thankfully neither do I work on our financial systems at work. – James Khoury Aug 02 '12 at 00:32
-
"If you must get more accuracy from floating point, sort your terms. Add small numbers together, not to big numbers." Any sample on this? – mamcx Apr 25 '14 at 21:11
-
@mamcx Imagine a decimal floating point number having just one digit of precision. The computation 1.0 + 0.1 + ... + 0.1 (repeated 10 times) returns 1.0 as every intermediate result gets rounded. Doing it the other way round, you get intermediate results of 0.2, 0.3, ..., 1.0 and finally 2.0. This is an extreme example, but with realistic floating point numbers, similar problems happen. The basic idea is that adding numbers similar in size leads to the smallest error. Start with the smallest numbers as their sum is bigger and therefore better suited for addition to bigger ones. – maaartinus Jun 15 '17 at 19:25
-
Floating point stuff in Fortran and C++ is going to be mostly identical though. Both are accurate and offline, and I'm pretty sure Fortran has no native BCD reals... – Mark Mar 31 '18 at 17:53
Warning: The floating-point type System.Double lacks the precision for direct equality testing.
double x = CalculateX();
if (x == 0.1)
{
// ............
}
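The fix such a warning would steer developers toward is a tolerance comparison, sketched here (the tolerance value is a hypothetical, domain-specific choice):

using System;

class Demo
{
    static double CalculateX() => 1.0 - 0.9;   // stand-in for the CalculateX above

    static void Main()
    {
        double x = CalculateX();                      // 0.09999999999999998, not 0.1
        Console.WriteLine(x == 0.1);                  // False
        Console.WriteLine(Math.Abs(x - 0.1) < 1e-9);  // True: equal within a tolerance
    }
}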
I don't believe anything can or should be done at a language level.

-
I haven't used a float or double in a long time, so I'm curious. Is that an actual existing compiler warning, or just one you'd like to see? – Karl Bielefeldt Mar 28 '11 at 18:00
-
@Karl - Personally I haven't seen it or needed it but I imagine it could be useful to dedicated but green developers. – ChaosPandion Mar 28 '11 at 19:28
-
The binary floating point types are no better or worse qualitatively than Decimal when it comes to equality testing. The difference between 1.0m/7.0m*7.0m and 1.0m may be many orders of magnitude less than the difference between 1.0/7.0*7.0 and 1.0, but it's not zero. – supercat Jul 12 '12 at 22:11
-
depending on what language/framework you are referring to (.Net?) System.decimal is not a fixed point type – jk. Aug 01 '12 at 07:20
-
1.0+1.0 == 2.0 is a well-specified exact equality comparison which results in true – Patrick Aug 01 '12 at 14:11
-
@Patrick - I'm not sure what you are getting at. There is a huge difference between something being true for one case and being true for all cases. – ChaosPandion Aug 01 '12 at 14:15
-
@ChaosPandion: Historically, double has often been used to precisely manipulate whole-number quantities which were larger than the largest available integer types. In a language without 64-bit integers, a suitably-scaled double was the most practical type for storing monetary quantities (an unsigned int would be limited to about $42 million with penny accuracy; a double could handle $22 trillion with penny accuracy). – supercat Aug 01 '12 at 15:44
-
@ChaosPandion The problem with the example in this post isn't the equality-comparison, it's the floating-point literal. There is no float with the exact value 1.0/10. Floating point maths results in 100% accurate results when computing with integer numbers fitting within the mantissa. – Patrick Aug 01 '12 at 20:01
By default, languages should use arbitrary-precision rationals for non-integer numbers.
Those who need to optimize can always ask for floats. Using them as a default made sense in C and other systems programming languages, but not in most languages popular today.
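A toy sketch of such a default, built on C#'s BigInteger (illustrative only; a real implementation would normalize signs, reject zero denominators, and face the performance concerns raised in the comments below):

using System;
using System.Numerics;

// Toy arbitrary-precision rational: results are exact, never rounded.
readonly struct Rational
{
    public readonly BigInteger Num, Den;

    public Rational(BigInteger num, BigInteger den)
    {
        BigInteger g = BigInteger.GreatestCommonDivisor(num, den);
        Num = num / g;
        Den = den / g;
    }

    public static Rational operator +(Rational a, Rational b) =>
        new Rational(a.Num * b.Den + b.Num * a.Den, a.Den * b.Den);

    public override string ToString() => $"{Num}/{Den}";
}

Here the sum of three Rational(1, 3) values prints 1/1: no 0.333... rounding anywhere.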

-
Or you do it analytically, such as the way Matlab or Mathematica work. – Thomas Eding Jul 12 '12 at 22:58
-
Irrational numbers are dealt with in the same way as they currently are for floats --- by rounding. – redjamjar Jul 31 '12 at 21:07
-
I have to say I think this makes a lot of sense: most people who need exact numbers need rationals, not irrationals (science and engineering can use irrationals, but you are then back in the approximate realm again, or you are doing some quite specialized pure maths) – jk. Aug 01 '12 at 07:25
-
Computations with arbitrary-precision rationals will often be orders of magnitude slower (possibly MANY orders of magnitude slower) than computations with a hardware-supported double. If a calculation needs to be accurate to a part per million, it's better to spend a microsecond computing it to within a few parts per billion, than to spend a second computing it absolutely precisely. – supercat Aug 01 '12 at 15:54
-
@supercat: What you're suggesting is just a poster-child of premature optimisation. The current situation is that the vast majority of programmers have no need whatsoever for fast math, and then get bitten by hard to understand floating-point (mis)behaviour, so that the relatively tiny number of programmers who need fast math gets it without having to type a single extra character. This made sense in the seventies, now it's just nonsense. The default should be safe. Those who need fast should ask for it. – Waquo Aug 01 '12 at 19:09
-
@Waquo: You vastly underestimate the cost of arbitrary-precision rational arithmetic. Since repeated operations upon fractions will generally cause denominators to increase, use of unlimited-precision rational types can easily turn an algorithm which should run in linear time and constant space into one which takes exponential time and exponential space. That doesn't sound very "safe" to me. – supercat Aug 01 '12 at 21:40
-
@dsimcha: Incidentally, there's a useful set of numbers "between" the rationals and the reals, of the form a/sqrt(b), with a being any integer and b being a positive integer. One cannot compute a square root of numbers in that form, but given two or more such numbers one can compute the "distance" function sqrt(X*X + Y*Y) [e.g. for the numbers a/sqrt(b) and c/sqrt(d), the result would be (a^2d+c^2b)/sqrt((ad)^2b+(bc)^2d)]. That could be an interesting numeric type for some geometric or vector calculations, though iterated calculations could be expensive. – supercat Aug 02 '12 at 16:13
I find it strange that nobody has pointed out the Lisp family's rational number trick.
Seriously, open sbcl, and do this:
(+ 1 3)
and you get 4.
If you do (* 3 2)
you get 6.
Now try (/ 5 3)
and you get 5/3, or 5 thirds.
That should help somewhat in some situations, shouldn't it?

-
I wonder, is it possible to know whether a result needs to be represented as 1/3 or could be an exact decimal? – mamcx Apr 25 '14 at 21:10
-
The two biggest problems involving floating point numbers are:
- inconsistent units applied to the calculations (note this also affects integer arithmetic in the same way)
- failure to understand that FP numbers are an approximation and how to intelligently deal with rounding.
The first type of failure can only be remedied by providing a composite type that includes value and unit information. For example, a length or area value that incorporates the unit (meters or square meters, feet or square feet respectively). Otherwise you have to be diligent about always working with one type of unit of measurement and only converting to another when sharing the answer with a human.
The second type of failure is a conceptual failure. The failures manifest themselves when people think of them as absolute numbers. It affects equality operations, cumulative rounding errors, etc. For example, it may be correct that for one system two measurements are equivalent within a certain margin of error. I.e. .999 and 1.001 are roughly the same as 1.0 when you don't care about differences that are smaller than +/- .1. However, not all systems are that lenient.
If there is any language level facility needed, then I would call it equality precision. In NUnit, JUnit, and similarly constructed testing frameworks you can control the precision that is considered correct. For example:
Assert.That(.999, Is.EqualTo(1.001).Within(10).Percent);
// -- or --
Assert.That(.999, Is.EqualTo(1.001).Within(.1));
If, for example, C# or Java were altered to include a precision operator, it might look something like this:
if(.999 == 1.001 within .1) { /* do something */ }
However, if you supply a feature like that, you also have to consider the case where equality is good if the +/- sides are not the same. For example, +1/-10 would consider two numbers equivalent if one of them was within 1 more, or 10 less, than the first number. To handle this case, you might need to add a range keyword as well:
if(.999 == 1.001 within range(.001, -.1)) { /* do something */ }
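Until a language grows such syntax, the semantics can be approximated with ordinary methods (a hypothetical helper mirroring the proposed within and range forms; note it takes the asymmetric bounds as magnitudes rather than signed values):

using System;

static class ApproximateEquality
{
    // a == b within tol
    public static bool EqualsWithin(double a, double b, double tol) =>
        Math.Abs(a - b) <= tol;

    // a == b within range(plus, minus): b may be at most `plus` above
    // or `minus` below a.
    public static bool EqualsWithin(double a, double b, double plus, double minus) =>
        b - a <= plus && a - b <= minus;
}

For instance, ApproximateEquality.EqualsWithin(.999, 1.001, .1) returns true.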

-
I'd switch the order. The conceptual problem is pervasive. The units conversion issue is relatively minor by comparison. – S.Lott Mar 28 '11 at 16:58
-
I like the concept of a precision operator but as you mention further on it would definitely need to be well thought out. Personally I would be more inclined to see it as its own complete syntactical construct. – ChaosPandion Mar 28 '11 at 16:59
-
@Michael - Well that depends on the language, of course. If this were say JavaScript I would be inclined to support a precision operator. (As long as the operator was much shorter than an equivalent call to a library.) – ChaosPandion Mar 28 '11 at 17:15
-
I'd rather see a pragma that defined the precision, and have that used whenever floating-point equality was checked. That way, you don't bulk up the source with an additional operator, and you get consistent handling. – TMN Mar 28 '11 at 17:53
-
@TMN, there are two problems with that. First, not all languages have pragma capabilities, and not all algorithms need the same precision. I'm working on an application right now where we need greater precision for our spatial calculations than we do for our performance evaluation calculations. – Berin Loritsch Mar 28 '11 at 18:09
-
@Berin Loritsch: I assumed since we were talking about adding operators to a language, we could consider adding compiler directives as well. I also assumed the directive would apply per-module, and could be redefined at any point in the module (so you could specify tighter precision in some spots, and relax it in others). Mostly, I was just throwing it out as an alternative to a precision operator, I didn't expect the Spanish Specquisition! :) – TMN Mar 28 '11 at 18:28
-
Nothing of the sort. I didn't know what your assumptions were. I try not to make assumptions and take what I see as the entirety of the message. I'm working on my clairvoyance, but I'm struggling a bit. :) – Berin Loritsch Mar 28 '11 at 19:18
-
@TMN: Precision information needs to be attached to individual values, not to the program as a whole. Perhaps languages could have a Measurement type implemented as a (value, error, units) triple. – dan04 Mar 29 '11 at 00:05
-
@dan04: I was thinking more in terms of "all calculations accurate to within one percent" or the like. I've seen the tar-pit that is unit of measure handling and I'm staying well away. – TMN Mar 29 '11 at 12:05
-
@Berin: unfortunately the precision itself is difficult. As you pointed out with the Assert.That example, it should be expressed in percentage, not absolute value. – Matthieu M. Mar 31 '11 at 18:20
-
About 25 years ago, I saw a numeric package featuring a type consisting of a pair of floating-point numbers representing the maximum and minimum possible values for a quantity. As numbers passed through calculations, the difference between maximum and minimum would grow. Effectively, this provided a means of knowing how much real precision was present in a calculated value. – supercat Jul 12 '12 at 22:18
One thing I would like to see would be a recognition that double to float should be regarded as a widening conversion, while float to double is narrowing(*). That may seem counter-intuitive, but consider what the types actually mean:
- 0.1f means "13,421,773/134,217,728, plus or minus 1/268,435,456 or so".
- 0.1 really means "3,602,879,701,896,397/36,028,797,018,963,968, plus or minus 1/72,057,594,037,927,936 or so".
If one has a double which holds the best representation of the quantity "one-tenth" and converts it to float, the result will be "13,421,773/134,217,728, plus or minus 1/268,435,456 or so", which is a correct description of the value.
By contrast, if one has a float which holds the best representation of the quantity "one-tenth" and converts it to double, the result will be "13,421,773/134,217,728, plus or minus 1/72,057,594,037,927,936 or so"--a level of implied accuracy which is wrong by a factor of over 53 million.
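A quick C# check of that asymmetry (the language calls float-to-double "widening", yet the float's error is carried along unchanged):

using System;

class Demo
{
    static void Main()
    {
        float f = 0.1f;
        double d = f;                         // float -> double keeps the exact binary value
        Console.WriteLine(d == 0.1);          // False: d is 0.100000001490116...
        Console.WriteLine((float)0.1 == f);   // True: double -> float lands on the same float
    }
}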
Although the IEEE-754 standard requires that floating-point maths be performed as though every floating-point number represents the exact numerical quantity precisely at the center of its range, that should not be taken to imply that floating-point values actually represent those exact numerical quantities. Rather, the requirement that the values be assumed to be at the center of their ranges stems from three facts: (1) calculations must be performed as though the operands have some particular precise values; (2) consistent and documented assumptions are more helpful than inconsistent or undocumented ones; (3) if one is going to make a consistent assumption, no other consistent assumption is apt to be better than assuming a quantity represents the center of its range.
Incidentally, I remember some 25 years or so ago, someone came up with a numerical package for C which used "range types", each consisting of a pair of 128-bit floats; all calculations would be done in such fashion as to compute the minimum and maximum possible value for each result. If one performed a big long iterative calculation and came up with a value of [12.53401391134 12.53902812673], one could be confident that while many digits of precision were lost to rounding errors, the result could still be reasonably expressed as 12.54 (and it wasn't really 12.9 or 53.2). I'm surprised I haven't seen any support for such types in any mainstream languages, especially since they would seem a good fit with math units that can operate on multiple values in parallel.
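A minimal sketch of such a range type (illustrative; a faithful implementation would round each lower bound down and each upper bound up, which ordinary IEEE operations do not do by themselves):

using System;

// Toy interval type: carries the minimum and maximum possible value
// of a quantity through a calculation.
readonly struct Interval
{
    public readonly double Min, Max;

    public Interval(double min, double max) { Min = min; Max = max; }

    public static Interval operator +(Interval a, Interval b) =>
        new Interval(a.Min + b.Min, a.Max + b.Max);

    public static Interval operator -(Interval a, Interval b) =>
        new Interval(a.Min - b.Max, a.Max - b.Min);

    public override string ToString() => $"[{Min}, {Max}]";
}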
(*) In practice, it's often helpful to use double-precision values to hold intermediate computations when working with single-precision numbers, so having to use a typecast for all such operations could be annoying. Languages could help by having a "fuzzy double" type, which would perform computations as double, and could be freely cast to and from single; this would be especially helpful if functions which take parameters of type double and return double could be marked so that they would automatically generate an overload which accepts and returns "fuzzy double" instead.

One thing languages could do: remove the equality comparison from floating-point types, other than a direct comparison to the NaN values.
Equality testing would only exist as a function call that takes the two values and a delta, or, for languages like C# that allow types to have methods, an EqualsTo that takes the other value and the delta; one possible shape is sketched below.
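A sketch of what that method might look like (hypothetical API, with only the NaN case keeping a direct test):

using System;

static class DoubleEquality
{
    // Hypothetical EqualsTo: the only equality the type would expose.
    public static bool EqualsTo(this double value, double other, double delta) =>
        double.IsNaN(value) || double.IsNaN(other)
            ? double.IsNaN(value) && double.IsNaN(other)   // NaN matches only NaN here
            : Math.Abs(value - other) <= delta;
}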

What can programming languages do? Don't know if there's one answer to that question, because anything the compiler/interpreter does on the programmer's behalf to make his/her life easier usually works against performance, clarity, and readability. I think the C++ way (pay only for what you need) and the Perl way (principle of least surprise) are both valid, but it depends on the application.
Programmers still need to work with the language and understand how it handles floating point, because if they don't, they'll make assumptions, and one day the prescribed behavior won't match up with their assumptions.
My take on what the programmer needs to know:
- What floating-point types are available on the system and in the language
- What type is needed
- How to express the intentions of what type is needed in the code
- How to correctly take advantage of any automatic type promotion to balance clarity and efficiency while maintaining correctness

I agree there's nothing to do at the language level. Programmers must understand that computers are discrete and limited, and that many of the mathematical concepts represented in them are only approximations.
Never mind floating point: to avoid typical problems with integer arithmetic, one already has to understand that half of the bit patterns are used for negative numbers and that 2^64 is actually quite small.
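For example, in C# (where integer arithmetic is unchecked by default, so running past the end of the range wraps around silently):

using System;

class Demo
{
    static void Main()
    {
        int big = int.MaxValue;            // 2,147,483,647
        Console.WriteLine(big + 1);        // -2,147,483,648: silent wrap-around
        // checked { int boom = big + 1; } // opting in would throw OverflowException
    }
}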

-
disagree, most languages currently give too much support for binary floating point types (why is == even defined for floats?) and not enough support for rationals or decimals – jk. Aug 01 '12 at 07:29
-
@jk: Even if the result of any computation would never be guaranteed equal to the result of any other computation, equality comparison would still be useful for the case where the same value gets assigned to two variables (though the equality rules commonly implemented are perhaps too loose, since x == y does not imply that performing a computation on x will yield the same result as performing the same computation on y). – supercat Aug 01 '12 at 15:57
-
@supercat you still need comparison, but I'd rather the language required me to specify a tolerance for each floating point comparison; I can then still get back to equality by choosing tolerance = 0, but I'm at least forced to make that choice – jk. Aug 01 '12 at 17:35
What can programming languages do to avoid [floating point] pitfalls...?
Use sensible defaults, e.g. built-in support for decimals.
Groovy does this quite nicely, although with a bit of effort you can still write code to introduce floating point imprecision.

If more programming languages took a page from databases and allowed developers to specify the length and precision of their numeric data types, they could substantially reduce the probability of floating point related errors. If a language allowed a developer to declare a variable as a Float(2), indicating that they needed a floating point number with two decimal digits of precision, it could perform mathematical operations much more safely. If it did so by representing the variable as an integer internally and dividing by 100 before exposing the value, it could improve speed by using the faster integer arithmetic paths. The semantics of a Float(2) would also let developers avoid the constant need to round data before outputting it since a Float(2) would inherently round data to two decimal points.
Of course, you'd need to allow a developer to ask for a maximum-precision floating point value when the developer needs to have that precision. And you would introduce problems where slightly different expressions of the same mathematical operation produce potentially different results because of intermediate rounding operations when developers don't carry enough precision in their variables. But at least in the database world, that doesn't seem to be too big a deal. Most people aren't doing the sorts of scientific calculations that require lots of precision in intermediate results.
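A sketch of the proposal (a hypothetical Fixed2 type: two decimal digits of precision carried as a scaled integer, as described above):

using System;

// Hypothetical "Float(2)": two decimal digits, stored as integer hundredths.
readonly struct Fixed2
{
    private readonly long _hundredths;

    private Fixed2(long hundredths) { _hundredths = hundredths; }

    public static Fixed2 FromDouble(double value) =>
        new Fixed2((long)Math.Round(value * 100));   // round once, on the way in

    public static Fixed2 operator +(Fixed2 a, Fixed2 b) =>
        new Fixed2(a._hundredths + b._hundredths);   // fast, exact integer path

    public override string ToString() =>
        (_hundredths / 100.0).ToString("0.00");      // inherently two decimal places
}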

-
Specifying length and precision would do very little that is useful. Having fixed-point base 10 would be useful for financial processing, which would remove much of the surprise people get from floating-point. – David Thornley Mar 28 '11 at 17:43
-
@David - Perhaps I'm missing something but how is a fixed-point base 10 data type different than what I'm proposing here? A Float(2) in my example would have a fixed 2 decimal digits and would automatically round to the nearest hundredth which is what you'd likely use for simple financial calculations. More complex calculations would require that the developer allocated a larger number of decimal digits. – Justin Cave Mar 28 '11 at 17:52
-
What you're advocating is a fixed-point base 10 data type with programmer-specified precision. I'm saying that the programmer-specified precision is mostly pointless, and will just lead to the sorts of errors I used to run into in COBOL programs. (For example, when you change the precision of variables, it's real easy to miss one variable the value runs through. For another, it will take a lot more thinking about intermediate result size than is good.) – David Thornley Mar 28 '11 at 17:59
-
A Float(2) like you propose should not be called Float, since there is nothing floating here, certainly not the "decimal point". – Paŭlo Ebermann Mar 28 '11 at 19:59
As other answers have noted, the only real way to avoid floating point pitfalls in financial software is not to use it there. This may actually be feasible -- if you provide a well-designed library dedicated to financial math.
Functions designed to import floating-point estimates should be clearly labelled as such, and provided with parameters appropriate to that operation, e.g.:
Finance.importEstimate(float value, Finance roundingStep)
The only real way to avoid floating point pitfalls in general is education -- programmers need to read and understand something like What Every Programmer Should Know About Floating-Point Arithmetic.
A few things that might help, though:
- I'll second those who ask "why is exact equality testing for floating point even legal?"
- Instead, use an isNear() function.
- Provide, and encourage use of, floating-point accumulator objects (which add sequences of floating point values more stably than simply adding them all into a regular floating point variable); one way to build such an accumulator is sketched below.
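Kahan (compensated) summation is a standard way to implement such an accumulator; a minimal sketch (the accumulator-object idea is from the bullet above, Kahan summation is one common way to realize it):

using System;

// Compensated (Kahan) summation: tracks the low-order bits that plain
// addition discards whenever a small term meets a large running total.
sealed class FloatingAccumulator
{
    private double _sum, _compensation;

    public void Add(double value)
    {
        double y = value - _compensation;   // re-inject previously lost bits
        double t = _sum + y;                // low bits of y may be lost here...
        _compensation = (t - _sum) - y;     // ...so capture the loss for the next Add
        _sum = t;
    }

    public double Sum => _sum;
}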

- languages have Decimal type support; of course this doesn't really solve the problem: you still have no exact and finite representation of, for example, ⅓;
- some DBs and frameworks have Money type support; this is basically storing the number of cents as an integer;
- there are some libraries for rational number support; that solves the problem of ⅓, but doesn't solve the problem of, for example, √2;
These are applicable in some cases, but not really a general solution for dealing with float values. The real solution is to understand the problem and learn how to deal with it. If you're using floating-point calculations, you should always check whether your algorithms are numerically stable. There is a huge field of mathematics/computer science devoted to this problem: it's called Numerical Analysis.

Most programmers would be surprised that COBOL got that right... in the first version of COBOL there was no floating point, only decimal, and the tradition in COBOL continued until today that the first thing you think of when declaring a number is decimal... floating point would only be used if you really needed it. When C came along, for some reason, there was no primitive decimal type, so in my opinion, that's where all the problems started.

-
C didn't have a decimal type because it isn't primitive, very few computers having any sort of hardware decimal instructions. You might ask why BASIC and Pascal didn't have it, since they weren't designed to conform closely to the metal. COBOL and PL/I are the only languages I know of the time that had anything like that. – David Thornley Mar 28 '11 at 21:54
-
OK, then I ask instead... why didn't the C standard library have any decimal arithmetic? – JoelFan Mar 28 '11 at 21:59
-
@JoelFan: so how do you write ⅓ in COBOL? Decimal doesn't solve any problems, base 10 is just as inaccurate as base 2. – vartec Mar 28 '11 at 23:01
-
Decimal solves the problem of exactly representing dollars and cents, which is useful for a "Business Oriented" language. But otherwise, decimal is useless; it has the same kinds of errors (e.g., 1/3*3 = 0.99999999) while being much slower. Which is why it's not the default in languages that weren't specifically designed for accounting. – dan04 Mar 29 '11 at 00:12
-
And FORTRAN, which predates C by more than a decade, doesn't have standard decimal support either. – dan04 Mar 29 '11 at 05:09
-
@vartec, when was the last time you used 1/3 in a BUSINESS application? Decimal solves all problems in BUSINESS arithmetic – JoelFan Mar 29 '11 at 16:10
-
@David, "very few computers having any sort of hardware decimal instructins?" I guess you are not familiar with mainframes which have had BCD for many decades – JoelFan Mar 29 '11 at 16:13
-
@JoelFan: if you have a quarterly value and you need the per-month value, guess what you have to multiply it by... no, it's not 0.33, it's ⅓. – vartec Mar 29 '11 at 16:24
-
@JoelFan: I'm familiar with them. I've programmed them. They're probably about the only computers with microcode-level decimal arithmetic in use now, and there are very few of them, for any reasonable use of "very few" when applied to numbers of computers. – David Thornley Mar 29 '11 at 16:25
-
A major difficulty with bringing COBOL-style numerics to .NET or Java is that the number of fixed-point types is very large, and the frameworks don't provide a good way for a "SET X TO Y" method to deal sensibly with all the different combinations of types that may be involved. If precision is a characteristic of instances rather than storage-location types, there's no way to make x=y convert y to the precision of x before the assignment. One could use a mutable structure type and say x.Set(y);, but method invocation on mutable structure types is fraught with problems. – supercat Feb 10 '14 at 23:34
-
@DavidThornley: Was hardware support for decimal calculations rare in the minicomputer/mainframe world? The 4004, 8080, Z80, 6502, 8088, and 68000 all included hardware support for decimal calculations; at least in the smaller micros such support was widely used in video games (nearly every game on the Atari 2600 which shows a score in decimal either uses BCD or else--for low-scoring games--defines separate shapes for every possible score). – supercat Jun 12 '14 at 15:41