> 0.1 + 0.1 + 0.1 == 0.3
False
I always tell my students that if they (might) have a float, and are using the `==` operator, they're doing something wrong.

Well, there are many legitimate cases for using the equality operator. Insisting someone is doing something wrong is itself downright wrong, and you shouldn't be teaching floating-point numbers. A few use cases: floating-point values that differ from a default or initial value and carry meaning, e.g. 0 or 1 translating to omitting an entire operation. There is also the case of measuring the tiniest possible variation, where relative tolerances are not what you want. Not exhaustive. Using == with floating point only means you should have thought it through carefully.
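The sentinel-value case can be sketched in Python (the names here are hypothetical): a scale factor left at its default 1.0 means the scaling step can be skipped entirely, and == against that exact default is safe because the value was assigned, never computed.

```python
def apply_scale(values, scale=1.0):
    # scale == 1.0 is an exact comparison against the default sentinel.
    # It is safe because 1.0 is either the untouched default or was
    # assigned directly -- it was never produced by arithmetic.
    if scale == 1.0:
        return values  # omit the whole operation
    return [v * scale for v in values]
```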
That has more to do with decimal <-> binary conversion than arithmetic/comparison. Using hex literals makes it clearer
0x1.999999999999ap-4 ("0.1")
+0x1.999999999999ap-4 ("0.1")
---------------------
=0x3.3333333333334p-4 ("0.2")
+0x1.999999999999ap-4 ("0.1")
---------------------
=0x4.cccccccccccd0p-4 ("0.30000000000000004")
!=0x4.cccccccccccccp-4 ("0.3")

I have a relaxed rule for myself: if I'm using the == operator on floats, I must write a comment explaining why. I use == maybe once a year.
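The hex values above can be reproduced in the Python REPL style used at the top of the thread: float.hex() prints the exact binary value of each double (Python normalizes the mantissa to 1.x, so the exponents differ from the p-4 form above but the digits match).

```python
# float.hex() shows the exact bits of each double, making the
# decimal <-> binary conversion error visible.
print((0.1).hex())              # 0x1.999999999999ap-4
print((0.1 + 0.1).hex())        # 0x1.999999999999ap-3  (exactly 0.2's double)
print((0.1 + 0.1 + 0.1).hex())  # 0x1.3333333333334p-2
print((0.3).hex())              # 0x1.3333333333333p-2  -- last digit differs
```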
I also like how a / b can result in infinity even if both a and b are strictly non-zero[1]. So be careful rewriting floating-point expressions.
[1]: https://www.cs.uaf.edu/2011/fall/cs301/lecture/11_09_weird_f... (division result matrix)
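One way this happens (a sketch of one route, not the only one): the true quotient overflows the double range, so dividing a finite a by a finite, strictly non-zero b still rounds to infinity.

```python
import math

a = 1e308   # near the top of the double range
b = 1e-308  # tiny but strictly non-zero
q = a / b   # true quotient ~1e616 overflows the format, rounds to +inf
print(q)    # inf
assert math.isinf(q)
```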
.125 + .375 == .5
You should be using == for floats when they're actually equal. 0.1 just isn't an actual number.
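The .125 + .375 example works because every value involved is a dyadic fraction (a sum of powers of two), so each is represented exactly and == is reliable; a few checks along those lines:

```python
# Dyadic fractions (k / 2**n) are exact in binary floating point,
# so their sums compare exactly with ==.
assert 0.125 + 0.375 == 0.5
assert 0.25 + 0.25 + 0.25 + 0.25 == 1.0
# 0.1 = 1/10 has no finite binary expansion, so the analogous sum fails.
assert 0.1 + 0.1 + 0.1 != 0.3
```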
I would argue that
double m_D{}; [...]
if (m_D == 0) somethingNeedsInstantiation();
can avoid having to carry around, set, and check extra m_HasValueBeenSet booleans. Of course, it might not be something you want to overload beginner programmers with.
There are plenty of cases where `==` is correct. If you understand how floating-point numbers work at the same depth you understand integers, then you may know the possible values of each side and know there is zero error.
Anything doing "approximately close" comparisons is much slower and prone to even more subtle bugs (often trading obvious immediate bugs for bugs that are much harder to find and fix).
For example, I routinely make unit tests with inputs designed so answers are perfectly representable, so tests do bit exact compares, to ensure algorithms work as designed.
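That testing approach might look like this sketch (the function and values are hypothetical): inputs are chosen so every intermediate result is exactly representable, and the assertions demand bit-exact equality.

```python
def mean(xs):
    # straightforward running sum; exactness depends on the inputs chosen
    return sum(xs) / len(xs)

# Inputs chosen so the sum (6.0) and the quotient (6.0 / 3 == 2.0) are
# exactly representable: the test can safely do a bit-exact == compare.
assert mean([1.0, 2.0, 3.0]) == 2.0
assert mean([0.25, 0.75]) == 0.5
```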
I’d rather teach students there’s subtlety here with some tradeoffs.