Why aren't you programming in D?

I don't need the D.

Because Python is faster.

>system time is 20x faster on a cherry picked benchmark.

seems good to me

>cherry picked
It's the benchmark of choice of some D shill who used to spam every Python thread with his dead language.
Pure CPython is a horrible choice for number-crunching so he thought it was funny. He's not laughing now.

>sys time
I guess the bulk of it is from LLVM code.

>JIT on a tight loop speeds it up
Who would have guessed

>d compiler has all the time of this world and it can't optimize a simple loop better than a jit which has very little time available

JIT gives the interpreter runtime information that lets it optimize the loop better the longer it runs, something a compiler can't do statically at compile time.

That's the shittiest prime checker I've seen in a while, and I browse the Project Euler submissions regularly.

You can do better?
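The snippet being mocked isn't quoted in the thread, but a reasonable baseline in plain CPython is trial division up to √n, skipping even numbers. This is a sketch, not the benchmark code from the thread:

```python
import math

def is_prime(n: int) -> bool:
    """Trial division up to sqrt(n), skipping even divisors after 2."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2  # 2 is the only even prime
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True
```

Even this simple version avoids the classic sins (checking every number up to n, testing even divisors), which is usually where "shittiest prime checker" benchmarks go wrong.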

>oh wow, a JIT is faster in a trivial, numeric benchmark

Garbage collectors belong nowhere near a language that allows raw access to pointers

...

Your point? It's still JIT'ed.

Try with dmd 2.076 and -betterC

Also, this.

your point being?

toy language not used in the real world that's based on an overcomplicated mess of a language

It means it is still a worthless way to benchmark programming languages meant to create actual software.

I get the exact same time with and without -mcpu=native
i7 4700

>if __name__ == "__main__":
this will never be acceptable for me

Do you have a better idea? How is it better than a function with a magic name?

worse than*

>40% improvement
I am having a hard time believing that.

Did D finally stop breaking itself? Or are the devs still breaking their own language, e.g. adding a string concat that kills the string split?

Backwards compatibility has been a concern for them for a while

It avoids the unnecessary noise on the eyes that all those non-alpha characters cause.
Even better would be having something like "main" as default starting point but being able to pass a custom name along with the number of arguments to the compiler.
>inb4 b-but Python is an interpreter
That was a mistake in the first place.

>very little time available
It could have spent entire milliseconds on that loop. It has incredible amounts of time.

No difference on broadwell.
Skylake got a few new vectorized instructions but those aren't used.
This guy is full of shit, probably messed with the source.

what version of ldc2 are you using? i'm using 1.2 on haswell here

>probably messed with the source
I mean, all bets were off once the python guy used JIT

OOP in D is even more retarded than C++.

This.