cross-posted from: https://lemmy.ml/post/24332731
Stolen from here: https://fosstodon.org/@foo/113731569632505985
negative zero is real
Well, duh. All fractions are real.
except for the imaginary ones
Until you construct a square with them.
ℚ ⊂ ℝ ⊂ ℂ, at least that’s how I was taught.
LOL! Man I learned that in college and never used it ever again. I never came across any scenarios in my professional career as a software engineer where knowing this was useful at all outside of our labs/homework.
Anyone got any example where this knowledge became useful?
It’s useful to know that floats don’t have unlimited precision, and why adding a very large number to a very small number isn’t going to work well.
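A quick way to see that absorption effect (a minimal Rust sketch; the values are arbitrary):

```rust
fn main() {
    // f32 carries roughly 7 significant decimal digits, so a small
    // addend is completely absorbed by a much larger one.
    let big = 1.0e9_f32;
    assert_eq!(big + 1.0, big); // the 1.0 is lost entirely

    // f64 carries roughly 15-16 digits, so the same addition survives...
    let big = 1.0e9_f64;
    assert_ne!(big + 1.0, big);

    // ...but it hits the same wall once the magnitudes differ enough.
    let huge = 1.0e17_f64;
    assert_eq!(huge + 1.0, huge);
}
```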
Yeah, but you don’t really have to think about how it works in the background every time you deal with that. You just know.
You learn how once and then you just remember not to do that and that’s it.
I agree we don’t generally need to think about the technical details. It’s just good to be aware of the exponent and mantissa parts to better understand where the inaccuracies of floating point numbers come from.
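If anyone wants to poke at those parts directly, pulling them out of an f32 is just a bit of masking (a Rust sketch using the standard `to_bits`):

```rust
fn main() {
    let x = -6.25_f32;
    let bits = x.to_bits();

    // IEEE 754 single precision: 1 sign bit, 8 exponent bits, 23 mantissa bits.
    let sign = bits >> 31;
    let exponent = (bits >> 23) & 0xFF; // stored with a bias of 127
    let mantissa = bits & 0x7F_FFFF;    // implicit leading 1 for normal numbers

    // -6.25 = -1.5625 * 2^2, so the unbiased exponent prints as 2.
    println!("sign={sign} exponent={} mantissa={mantissa:#025b}", exponent as i32 - 127);
}
```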
And this is why f64 exists!
If you’re doing any work with accounting, or writing test cases with floating point values.
Please tell me you aren’t using floating points with money
Knowing not to use floating point with money is good use of that knowledge.
You’d be dismayed to find out how often I’ve seen people do that.
Yeah I shudder when I see floats and currency.
Eh, if you use doubles and you add 0.000314 (just over 0.03 cents) to ten billion dollars, you have an error of 1/10000 of a cent, and that’s a deliberately perverse transaction. It’s not ideal, but it’s not the disaster waiting to happen that using single precision is.
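Roughly what that looks like with those exact numbers (a quick sketch, f32 included for contrast):

```rust
fn main() {
    // Ten billion dollars plus just over 0.03 cents.
    let a = 10_000_000_000.0_f64;
    let b = 0.000_314_f64;
    println!("f64: {:.10}", a + b); // off by far less than a cent

    let a = 10_000_000_000.0_f32;
    let b = 0.000_314_f32;
    println!("f32: {:.10}", a + b); // the 0.000314 vanishes entirely
}
```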
That sounds like an explosive duo
No. I don’t have to remember that.
I just have to remember the limits and how you can break the system. You don’t have to think about the representation.
How long has this career been? What languages? And in what industries? Knowing how floats are represented at the bit level is important for all sorts of things including serialization and math (that isn’t accounting).
Since 2008.
I’ve worked as a build engineer, application developer, web developer, product support, DevOps, etc.
I only ever had to worry about the limits, but never how it works in the background.
Accumulation of floating point precision errors is so common that we have an entire page about why unit tests need to account for it.
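In practice that usually looks something like this (a minimal sketch; the tolerance is an arbitrary example, not a universal constant):

```rust
#[cfg(test)]
mod tests {
    // Naive comparison: summing 0.1 ten times does not give exactly 1.0.
    #[test]
    fn sum_of_tenths_is_not_exact() {
        let sum: f64 = (0..10).map(|_| 0.1_f64).sum();
        assert_ne!(sum, 1.0);
    }

    // What the test actually needs: comparison within a tolerance.
    #[test]
    fn sum_of_tenths_is_close_enough() {
        let sum: f64 = (0..10).map(|_| 0.1_f64).sum();
        assert!((sum - 1.0).abs() < 1e-9);
    }
}
```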
In game dev it’s pretty common. Lots of stuff is built on floating point while balancing quality and performance, so we can’t just switch to double when things start getting janky (we can’t afford the cost); instead we actually have to think and work out the limits of 32-bit floats.
So you have to remember how it’s represented in the system, with how the bits are used? Or do you just have to remember some general rules like “if you do that, it’ll fuck up”?
Well, rules like “all integers can be represented up to 2^24” and “10^-38 is where denormalisation happens” are helpful, but I often have to figure out why I got a dodgy value from first principles; floating point is too complicated to solve every problem with 3 general rules.
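Both of those rules are easy to check directly (a quick Rust sketch, nothing game-specific):

```rust
fn main() {
    // Integers are exact in f32 up to 2^24 = 16_777_216...
    assert_eq!(16_777_215.0_f32 + 1.0, 16_777_216.0);
    // ...but one past that can't be represented, so the +1 is silently lost.
    assert_eq!(16_777_216.0_f32 + 1.0, 16_777_216.0);

    // Below roughly 1.18e-38 (f32::MIN_POSITIVE) values go subnormal
    // and start losing mantissa bits.
    let tiny = 1.0e-40_f32;
    assert!(tiny > 0.0 && tiny < f32::MIN_POSITIVE);
    assert!(tiny.is_subnormal());
}
```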
I wrote a float-from-string function once, which obviously requires the details (an intentionally low-quality and limited but faster variant, since the standard version was way too slow).
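Something in the spirit of that (a deliberately naive, hypothetical sketch: no sign, no exponent, no overflow handling, and no guarantee of correct rounding):

```rust
// Parse a plain decimal string like "3.14159" into an f64.
fn naive_parse_float(s: &str) -> Option<f64> {
    let mut parts = s.splitn(2, '.');
    let int_part = parts.next()?;
    let frac_part = parts.next().unwrap_or("");

    let mut value = 0.0_f64;
    for c in int_part.chars() {
        value = value * 10.0 + c.to_digit(10)? as f64;
    }
    let mut scale = 0.1_f64;
    for c in frac_part.chars() {
        value += c.to_digit(10)? as f64 * scale;
        scale *= 0.1;
    }
    Some(value)
}

fn main() {
    // Close to 3.14159, but not guaranteed to match the correctly
    // rounded f64 that the standard library parser would produce.
    println!("{}", naive_parse_float("3.14159").unwrap());
}
```

Knowing exactly where a parser like this loses bits is where the representation details stop being trivia.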
https://h14s.p5r.org/2012/09/0x5f3759df.html comes to mind
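For anyone not clicking through: that’s the famous Quake III fast inverse square root, which only works because of how f32 bits are laid out (a rough Rust translation, assuming a positive, finite, normal input):

```rust
// Approximate 1/sqrt(x) with the 0x5f3759df bit trick plus one
// Newton-Raphson refinement step (max error around 0.2%).
fn fast_inv_sqrt(x: f32) -> f32 {
    let i = x.to_bits();
    let i = 0x5f37_59df_u32.wrapping_sub(i >> 1); // the "magic" constant
    let y = f32::from_bits(i);
    y * (1.5 - 0.5 * x * y * y)
}

fn main() {
    println!("{}", fast_inv_sqrt(4.0));   // roughly 0.499
    println!("{}", 1.0 / 4.0_f32.sqrt()); // 0.5 for comparison
}
```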
To not be surprised when
0.1 + 0.2 != 0.3
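Concretely, in double precision:

```rust
fn main() {
    let sum = 0.1_f64 + 0.2;
    println!("{sum:.17}"); // 0.30000000000000004
    assert_ne!(sum, 0.3);
}
```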
Writing floating point emulation code?
I’d pretty much avoided learning about floating point until we decided to refactor the softfloat code in QEMU to support additional formats.
The vast majority of IT professionals don’t work on emulation or even system kernels. Most of us are building simple applications, supporting those applications, or handling their deployment and maintenance.
finally a clever user of the meme
It’s not mimicry.