Haha

Haha.

Hahahahahaha.

agner.org/optimize/optimization_manuals.zip

>using KDE
Now THAT'S horrifying.

So basically, once developers start optimizing for these execution units, shit's going to get real?

>KDE
Draw a dicc on your face if u r in danger

Sounds about right.

But it's a politics game at this point. The consumer market is hard to win over, but the high-end server market writes code that targets specific architectures, so it can take full advantage of this.

Seems that Linux compilers do take advantage of this to a degree, but it's pretty application-specific.

Whoever he is he's lying through his teeth

>c-cell is going to revolutionize everything! devs just have to get used to it!
>8 cores with each core 1/4th as powerful as modern x86 cores are going to revolutionize everything! devs just have to get used to it!
I've heard this same story so many times. It never happens, and even when it does, by that time the CPU is so out of date it's unusable anyways.
I REALLY don't think this issue in particular is that big a deal, though.

literally just vector instructions, who uses vector instructions on fucking CPU?

Oh well, Skylake-EX/EP is DOA then.

Compilers

Why does Ryzen compile shit faster then?

Ryzen is shit

>Whoever he is
>he doesn't know who Agner Fog is

>who uses vector instructions on fucking CPU?
anyone know? I have no idea, it's so niche
xeon phi does it, right? that's it

Because it's a very parallel load by its nature and doesn't need intensive scheduling like a real-time gayme. You could have 1000 cores and GCC will use them all

>You could have a 1000 cores and GCC will use them all
Actually, compiling is extremely serial, so compilers never bother to multithread, because there is basically zero performance advantage.
You can run multiple instances of the compiler, though. That only scales up to the number of files you have.

That doesn't seem to be the case at all.

that is why the article says that Ryzen has higher IPC

Holy shit, that 1700 is so fucking tempting. It basically runs off the friction of air molecules, its MT performance is just a lick from the 6900K, I can plop it in a $70 motherboard, and I get a decent cooler on top of it.

This is seriously the best chip in the Ryzen lineup, only the 1600 comes close.

That wasn't me that replied with the graph.
I don't have that deep an understanding of software optimization and was mostly just parroting. I'll need to do some reading about it on my own, as my school is a joke.
Anyway, I was under the impression that there were two big groups of software as far as multi-core utilization goes. You have things like vidya that are hard to schedule because they're interactive and you can't predict the future workload. Then in the other group you have software that perfectly utilizes multiple cores, and I was under the impression that compiling was one of them, video rendering being another.
Maybe I'm taking what you're saying wrong, but isn't compiling a bunch of small tasks where each individual task relies on single-thread performance but they can be spread across all cores? How is that different from any other easily parallel task?

When I said "compiling is extremely serial", I really mean "compiling a single file is extremely serial". There are many stages to compiling a file, and they all rely on the previous one. There is no point multithreading because the threads will always be stuck waiting for the previous stage to be finished.
That's just a single file, though. You can compile different files independently of each other, so those compile-time benchmarks open a separate compiler instance for each file (usually as many at once as you have cores, but not always).
Projects like Linux have literally thousands of different files to compile, so seeing how quickly you can compile all of them is a pretty good benchmark.

I understand, but what about software that isn't as complex as the Linux kernel? Say you wanted to compile firefox or other similar desktop applications from source on gentoo or something, would they also benefit from the many threads?
And if I were to compile one singular C or C++ file, one long ass 1230912093 line source code file, would it only use one thread?

These things are easily googleable

>firefox
That's not a good example, as it's also massive. It's also C++, so it means that compile times are REALLY fucking slow. C programs compile much quicker.
>would they also benefit from the many threads?
Yes. Most non-trivial programs have lots of source files.
>If I were to compile one singular C or C++ file, one long ass 1230912093 line source code file, it would only use one thread?
Yes.

>chromium takes 9x longer to compile than the linux kernel
browsers have gone too far

C++ was a mistake.

Browsers are C++ plus plenty of Java/JS/Python/etc

Linux kernel is mostly C

I enjoy talking to people sometimes, and it's a somewhat difficult thing to google. What would the query even look like? "how parallel is gcc"? "nature of compilation"? I could have tried, but the guy that responded to me sounds very knowledgeable, so I asked him. Fuck you.
Thank you

>matrix and vector calculations
>buffer mixing
>fast memcpy or string operations
>taylor series calculations

There are a lot of uses. They're pretty popular in 3D graphics and audio libraries. They only really matter when performance is critical, though, since SSE and AVX are a pain in the ass to work with.