>need a job
>learn node and get a job as a backend dev
>whatever it pays
>create remittance file
>pull values from database with type Numeric(8,2)
>ok whatever
>let amt = (transactionAmount*100).toString().padStart(LEN, '0')
>works cool
>next day
>"user what the fuck"
xxxxxxxxxxxxxxxxxx
000015000
xxxxxxxxxxxxxxxxxx
523710.00000000006
xxxxxxxxxxxxxxxxxx
000333000
xxxxxxxxxxxxxxxxxx
000005000
xxxxxxxxxxxxxxxxxx
14019.999999999998
What the fuck dude I know I should have just parseInt but still what the fuck is this shit this is bullshit fuck javascript fuck this stupid language
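What OP needed was to round to integer cents before formatting. A minimal sketch, assuming the amount arrives as a regular JS number and `LEN` is the field width from OP's snippet (`formatAmount` is a made-up name):

```javascript
// Round once to integer cents, then pad; this kills the float tail like 14019.999999999998.
const LEN = 9; // assumed field width for the remittance file
function formatAmount(transactionAmount) {
  const cents = Math.round(transactionAmount * 100); // 140.2 * 100 -> 14020, not 14019.999...
  return String(cents).padStart(LEN, '0');
}
console.log(formatAmount(140.2));  // "000014020"
console.log(formatAmount(5237.1)); // "000523710"
```

Better still, have the driver hand you the NUMERIC(8,2) column as a string or as integer cents so floats never enter the picture.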
another "javascript sucks" bait thread, now that's a good one
Aiden Wood
I just checked C, I get 0.1+0.2 === 0.30000
How is it bait, so far I've enjoyed Node but the type handling has been the worst experience by far
Andrew Robinson
This is how floating point numbers work; it's not exclusive to JS. I hope you didn't put this in prod - and if it wasn't in prod, then why are you crying about it? You learned something, and you can fix it so it works before it goes to prod. What's this thread about, user? Do you just want to chat to someone?
Andrew Lee
kill yourself op
Hunter Sullivan
>I just checked C, I get 0.1+0.2 === 0.30000
Add a few 0s and you'll get there
Joseph Davis
#include <stdio.h>
int main() { long double a = 0.1L; long double b = 0.2L; printf("%.21Lf\n", a + b); return 0; }
I hope you're not doing calculations of money with regular floating point variables.
Christian Sanders
how did you make it through high school
Caleb Cruz
This is literally the first thing they ever tell you when you start working with floating point in literally any book or class ever.
I hope you get fired for this shit.
Asher Nelson
>What the fuck dude I know I should have just parseInt but still what the fuck is this shit this is bullshit fuck javascript fuck this stupid language
>It's javascript's fault you can't do floats correctly
Fuck off. The only language that does this shit correctly is surprisingly Go-lang. Even R finds a way to fuck this up.
>The only language that does this shit correctly is surprisingly Go-lang.
"Correctly" is subjective; Golang is merely providing a layer of abstraction over the fundamental behavior of the IEEE 754 standard.
Austin Martin
Can someone explain these floating point errors to a brainlet?
Computers are BASE 2 (binary). Decimals are BASE 10. Floating point errors happen when you try to represent a BASE 10 number as a BASE 2 number, then convert that BASE 2 number back to a BASE 10 number.
This is the extremely oversimplified brainlet version of it, anyways. If you take Digital Logic Design you'll understand it in far more depth.
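You can watch that base-2/base-10 mismatch from Node itself by asking for more digits than the default formatting shows (a quick check, nothing specific to OP's code):

```javascript
// The default toString picks the shortest digits that round-trip, which hides the error.
// Forcing 20 decimal places shows the doubles actually stored.
console.log((0.1).toFixed(20));       // "0.10000000000000000555"
console.log((0.2).toFixed(20));       // "0.20000000000000001110"
console.log((0.1 + 0.2).toFixed(20)); // "0.30000000000000004441"
```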
Colton Hill
I literally ask the 0.1+0.2 question in every interview for node devs on my team. OP is obviously a brainlet barista.
Kevin Reed
take a look at ieee 754 rounding mechanisms.
Nathan Phillips
Sorry, I should have specified: I have a CS degree. Do we lose precision when we convert to binary? I don't understand how 0.1+0.2 = 0.30000000000000004 when we have the bits to represent 0.1 and 0.2 without issue.
Adam Cruz
>we have the bits to represent 0.1 and 0.2 without issue
no, we don't
Dylan Ward
If you have a CS degree then just do it on paper yourself. Literally convert .1 and .2 to binary (round at 53 significant bits, since that's all a double keeps), add their binary representations, then convert back to base 10. You'll see for yourself that it doesn't work out on paper either.
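The paper exercise can be mechanized. A sketch of the digit-by-digit fraction conversion (`fractionToBinary` is a made-up helper), showing the binary expansion of 0.1 never terminates:

```javascript
// Classic algorithm: double the fraction; the integer part that pops out is the next bit.
function fractionToBinary(x, digits) {
  let bits = '';
  for (let i = 0; i < digits; i++) {
    x *= 2;
    if (x >= 1) { bits += '1'; x -= 1; } else { bits += '0'; }
  }
  return bits;
}
// 0.1 in binary is 0.000110011001100110011..., with "0011" repeating forever,
// so any fixed number of bits has to cut it off somewhere.
console.log(fractionToBinary(0.1, 24)); // "000110011001100110011001"
```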
Jacob Martinez
real value   0.1
stored value 0.100000001490116119384765625
binary       00111101110011001100110011001101

real value   0.2
stored value 0.20000000298023223876953125
binary       00111110010011001100110011001101
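Those single-precision bit patterns can be reproduced from Node with a `DataView` (a quick sketch, nothing more):

```javascript
// Store a number as a 32-bit float and dump its raw bits (DataView is big-endian by default).
function floatBits(x) {
  const view = new DataView(new ArrayBuffer(4));
  view.setFloat32(0, x);
  return view.getUint32(0).toString(2).padStart(32, '0');
}
console.log(floatBits(0.1)); // "00111101110011001100110011001101"
console.log(floatBits(0.2)); // "00111110010011001100110011001101"
```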
because fpu -> apu -> fpu for 0.1+0.2 is retarded, I don't even want to know how wasteful such a thing would be.
Joshua Sanchez
>except perl6, haskell, and every other language that uses sane rational numbers
trivially refuted
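A toy version of what those rational-number languages give you, sketched with plain BigInt (`addFractions` and `gcd` are made-up names):

```javascript
// Exact rationals: keep numerator/denominator as BigInts and reduce by the gcd.
const gcd = (a, b) => (b === 0n ? a : gcd(b, a % b));
function addFractions([n1, d1], [n2, d2]) {
  const n = n1 * d2 + n2 * d1;
  const d = d1 * d2;
  const g = gcd(n < 0n ? -n : n, d);
  return [n / g, d / g];
}
// 1/10 + 2/10 is exactly 3/10 -- no 0.30000000000000004 in sight.
console.log(addFractions([1n, 10n], [2n, 10n])); // [ 3n, 10n ]
```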
Lucas Watson
you mean like packed BCD? Welcome to COBOL
Jaxon Lewis
Imagine computers used base 10 and humans used base 3 for some reason (I use this example because you're probably familiar with 1/3 being 0.3333333...). If the user entered something like 2.1 * 10 in base 3, the computer would see it as 2.333333333 * 3, and the result would be 6.999999999 instead of the correct 7. The user, who works in base 3, would see 20.222222222222222222121112120101 printed on their screen instead of the expected result (21). That's because 10 = 2 * 5, so 3 is not one of its prime divisors and base-3 fractions like 0.1 (one third) can't terminate in base 10. The same thing happens in the real world, where humans use base 10 and computers use binary: 10 = 2 * 5, and 5 doesn't divide any power of 2, so when you enter something like 0.1, it can't be represented exactly in the computer's memory.
Gavin Gray
What a dumbass. It's a crime that anyone gave you a job to write code.
Isaiah Williams
>using anything but bigdecimal
you brought this upon yourself, user