Fucking JavaScript

>need a job
>learn node and get a job as a backend dev
>whatever it pays
>create remittance file
>pull values from database with type Numeric(8,2)
>ok whatever
>let amt = (transactionAmount*100).toString().padStart(LEN, '0')
>works cool
>next day
>"user what the fuck"
xxxxxxxxxxxxxxxxxx 000015000
xxxxxxxxxxxxxxxxxx 523710.00000000006
xxxxxxxxxxxxxxxxxx 000333000
xxxxxxxxxxxxxxxxxx 000005000
xxxxxxxxxxxxxxxxxx 14019.999999999998


What the fuck, dude. I know I should have just used parseInt, but still, what the fuck is this shit? This is bullshit. Fuck JavaScript, fuck this stupid language.

>tl;dr
0.1+0.2 === 0.3 // false
0.1+0.2 === 0.30000000000000004 // true
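
For what it's worth, the fix OP is groping for is to round to whole cents before formatting, not parseInt (parseInt would silently drop a cent whenever the float error lands just below the true value, e.g. 14019.999999999998 -> 14019). A minimal sketch, assuming transactionAmount comes back as a plain JS number and LEN is the fixed field width from the remittance spec (guessed as 9 to match the rows above):

function formatAmount(transactionAmount, LEN) {
  // NUMERIC(8,2) arrives as a double; round to integer cents before stringifying
  const cents = Math.round(transactionAmount * 100);
  return String(cents).padStart(LEN, '0');
}

console.log(formatAmount(140.2, 9)); // "000014020" instead of "14019.999999999998"
console.log(formatAmount(150, 9));   // "000015000"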


Please respond

This is true in any language, retard.

another "javascript sucks" bait thread, now that's a good one

I just checked C, I get 0.1+0.2 === 0.30000

How is it bait? So far I've enjoyed Node, but the type handling has been the worst experience by far.

This is how floating point numbers work; it's not exclusive to JS. I hope you didn't put this in prod - if it wasn't in prod, then why are you crying about it? You learned something, and you can fix it so it works before you put it in prod. What's this thread about, user? Do you just want to chat to someone?

kill yourself op

>I just checked C, I get 0.1+0.2 === 0.30000
Add a few 0s and you'll get there

#include <stdio.h>

int main()
{
    long double a = 0.1;
    long double b = 0.2;

    printf("%.30f\n", 0.1f + 0.2f);
    printf("%.30f\n", 0.1 + 0.2);
    printf("%.30Lf\n", a + b);
    return 0;
}


(vim) /tmp % ./test
0.300000011920928955078125000000
0.300000000000000044408920985006
0.300000000000000016653345369377

>webshit doesn’t understand how numbers work

>what's a TypedArray

I hope you're not doing calculations of money with regular floating point variables.
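
To make that concrete, here's a rough illustration (not OP's code): float dollars drift as you accumulate them, while integer cents (BigInt if the totals get big) stay exact.

// Sum ten payments of $0.10 each
let floatTotal = 0;
let centsTotal = 0n; // minor units as BigInt
for (let i = 0; i < 10; i++) {
  floatTotal += 0.1;  // each add picks up a little binary rounding error
  centsTotal += 10n;  // ten cents, exact
}
console.log(floatTotal === 1);    // false (0.9999999999999999)
console.log(centsTotal === 100n); // true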

how did you make it through high school

This is literally the first thing they ever tell you when you start working with floating point in literally any book or class ever.

I hope you get fired for this shit.

>What the fuck, dude. I know I should have just used parseInt, but still, what the fuck is this shit? This is bullshit. Fuck JavaScript, fuck this stupid language.

>It's javascript's fault you can't do floats correctly
Fuck off. The only language that does this shit correctly is surprisingly Go-lang. Even R finds a way to fuck this up.


>The only language that does this shit correctly is surprisingly Go-lang.
"Correctly" is subjective, as Golang is merely providing a layer of abstraction over the fundamental behavior of the IEEE standard.

Can someone explain these floating point errors to a brainlet?


Computers are BASED 2 (Binary)
Decimals are BASED 10
Floating point errors happen when a BASE 10 number has no exact BASE 2 equivalent: the computer stores the closest BASE 2 value it can, and converting that back to BASE 10 shows the difference.

This is the extremely oversimplified brainlet version of it, anyways. If you take Digital Logic Design you'll understand it in far more depth.
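
You don't even need the class to see it; a Node REPL will print the (truncated) binary fraction it actually stores:

// 1/10 is a repeating fraction in base 2, so it gets cut off at 53 significant bits
console.log((0.1).toString(2));       // 0.000110011001100110011... (repeating 0011)
console.log((0.5).toString(2));       // 0.1 -- powers of two convert exactly
console.log((0.1 + 0.2).toString(2)); // differs from (0.3).toString(2) in the last bits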

I literally ask the 0.1+0.2 question in every interview for node devs on my team.
OP is obviously a brainlet barista.

Take a look at the IEEE 754 rounding modes.
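
For the record, the answer that interview question is usually fishing for: don't compare floats with ===, compare against a tolerance. Something like:

// Classic tolerance check; Number.EPSILON is the gap between 1 and the next double,
// so a fixed epsilon like this only makes sense for values with magnitude around 1
function nearlyEqual(a, b, eps = Number.EPSILON) {
  return Math.abs(a - b) < eps;
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true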

Sorry, I should have specified: I have a CS degree. Do we lose precision when we convert to binary? I don't understand how 0.1+0.2 = 0.3000000000023 when we have the bits to represent 0.1 and 0.2 without issue.

>we have the bits to represent 0.1 and 0.2 without issue
no, we don't

If you have a CS degree then just do it on paper yourself.
Literally convert .1 and .2 to binary (round the significand at 53 bits, like an IEEE double does), add their binary representations, then convert it back to base 10. You'll see for yourself that it just doesn't work out properly on paper either.
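
Or skip the paper and have Node print the stored values to 20 decimal places; it should show roughly this:

// What the doubles actually hold, rounded to 20 decimal places for display
console.log((0.1).toFixed(20));       // 0.10000000000000000555
console.log((0.2).toFixed(20));       // 0.20000000000000001110
console.log((0.1 + 0.2).toFixed(20)); // 0.30000000000000004441
console.log((0.3).toFixed(20));       // 0.29999999999999998890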

real value 0.1
stored value 0.100000001490116119384765625
binary 00111101110011001100110011001101

real value 0.2
stored value 0.20000000298023223876953125
binary 00111110010011001100110011001101

0.1 + 0.2 = 0.3
0.100000001490116119384765625 + 0.20000000298023223876953125 = 0.300000004470348358154296875

Why not store everything after the decimal point as a separate integer?


because going FPU -> ALU -> FPU for 0.1+0.2 is retarded; I don't even want to know how wasteful such a thing could be.

>except perl6, haskell, and every other language that uses sane rational numbers
trivially refuted
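
For anyone wondering what "sane rational numbers" buys you, here's a toy sketch with JS BigInt (rat/add/eq are made up for illustration, nothing built in):

// Exact rationals as numerator/denominator pairs of BigInts
function gcd(a, b) { return b === 0n ? a : gcd(b, a % b); }
function rat(n, d) {
  const g = gcd(n < 0n ? -n : n, d < 0n ? -d : d);
  return { n: n / g, d: d / g };
}
function add(x, y) { return rat(x.n * y.d + y.n * x.d, x.d * y.d); }
function eq(x, y) { return x.n === y.n && x.d === y.d; }

console.log(eq(add(rat(1n, 10n), rat(2n, 10n)), rat(3n, 10n))); // true, exactly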

you mean like packed BCD? Welcome to COBOL

Imagine computers used base 10 and humans used base 3 for some reason (I use this example because you're probably familiar with 1/3 being 0.3333333...). If the user entered something like 2.1 * 10 in base 3, the computer would see it as 2.333333333 * 3, and the result would be 6.999999999 instead of the correct 7. The user, who works in base 3, would see 20.222222222222222222121112120101 printed on their screen instead of the expected result (21). That's because 10 = 2 * 5, and 3 is not one of its prime divisors, so 1/3 has no finite base 10 expansion.
The same thing happens in the real world, where humans use base 10 and computers use base 2: 10 = 2 * 5, and 5 is not a factor of the base 2, so something like 0.1 can't be stored exactly - the computer keeps the nearest base 2 fraction instead.
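
Same idea straight from a JS console: fractions whose denominators are pure powers of two survive the trip to binary and back exactly; anything with a factor of 5 (or 3, 7, ...) underneath does not.

console.log(0.25 + 0.5 === 0.75);   // true  -- 1/4, 1/2, 3/4 are exact binary fractions
console.log(0.125 + 0.375 === 0.5); // true  -- still powers of two in the denominator
console.log(0.1 + 0.2 === 0.3);     // false -- 1/10 repeats in binary, so it's rounded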

What a dumbass. It's a crime that anyone gave you a job to write code.

>using anything but bigdecimal
you brought this upon yourself, user