Learn to Mutex bitch

Daily reminder that your 8-core CPU is utterly useless on anything below ~80% pure parallelization, i.e. on anything other than walking away from the machine while it encodes or renders something.
It's not even something that will improve over time. CPUs are by definition serial machines. Interactive applications will always depend on that serial speed.
This is because no matter how much you beg game or desktop programmers for 100% parallelization during interactivity, they will never do it because of mutexes.
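Do the Amdahl's law arithmetic yourself (rough back-of-the-envelope numbers, nothing more): speedup = 1 / ((1 - p) + p/N). At a p = 0.8 parallel fraction on N = 8 cores that's 1 / (0.2 + 0.8/8) = 1 / 0.3 ≈ 3.3x, while a plain 4-core already gets 1 / (0.2 + 0.2) = 2.5x. The second four cores buy you about 33%.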

Other urls found in this thread:

tomshardware.co.uk/multi-core-cpu-scaling-directx-11,review-33682-2.html

>AMDahl's Law
moar cores moar cores

Yeah, because everyone uses only one program at a time.

Windows alone has like 30+ system processes running at all times. Sparing applications from having to share cores with those processes means they can run faster, in theory. Plus SpeedStep/Cool'n'Quiet throttles down cores that aren't being fully utilized and Turbo Boost overclocks the ones that are.

Most of those other programs are idle on nearly 0% CPU usage.
That includes the 99 tabs of your 100-tabbed Chrome.

Those processes use at most 2% of the CPU time, most of the time.
If you remove most of the Windows botnet, even less.

...

>Those processes use at most 2% of the CPU time, most of the time.
>source: my anus

>Enthusiast CPUs for gamers/data crunchers/content creators aren't required for general computing

Well woop-de-shit. Next you'll be telling me I don't need a GTX 1080 to browse mongolian fishing expedition blogs.

>key is in inverse order of what's shown on the chart
This is dildos.

> he can't even run a fucking task manager

you'd be surprised how many memesters are under the delusion that "it's only a matter of time until my gayme is 100% parallelized".

>CPUs are by definition serial machines.
Wrong!


>It's not even something that will improve over time.
>they will never do it because of mutexes.
There's a pretty obvious contradiction here.
Mutexes are not the only way to do concurrency.
It is possible to automatically prove concurrent protocols correct.
Mutexes are shit tier concurrency primitives.
Basically: better, safer concurrency => easier parallelization
In addition to this, there are problems that are very easy to solve concurrently, such as running independent, heavy programs in your OS!
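For a concrete taste (just a toy C11 sketch, not from any real codebase): two threads hammer a shared counter through atomics, and no mutex appears anywhere in user code:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long counter = 0;              /* shared state, no lock around it */

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < 1000000; i++)
        atomic_fetch_add(&counter, 1);       /* lock-free read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("%ld\n", atomic_load(&counter));  /* always 2000000 */
    return 0;
}

Atomics aren't automatically "safer" than locks, mind you; the point is only that locks are not the only primitive on the menu.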

Mutex is fucking easy.

It is literally just 3 more lines of code for every critical section in C.
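Presumably something like this pthreads sketch is what's meant (illustrative only; the declaration plus the lock/unlock pair are the extra lines):

#include <pthread.h>

static long balance = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* extra line 1 */

void deposit(long amount)
{
    pthread_mutex_lock(&lock);      /* extra line 2: enter critical section */
    balance += amount;              /* the actual work */
    pthread_mutex_unlock(&lock);    /* extra line 3: leave critical section */
}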

Which makes it drastically slower and means the critical section isn't actually running in parallel at all.

Yes smartass, there are improvements, but the stupid meme "my gayme will be 100% parallelized eventually" is utterly stupid.

Don't games use more than one process? I'm not just talking about one process being parallelized but two independent processes being run on two separate threads.

>It's not even something that will improve over time.
>It's not even something that will improve over time.
>It's not even something that will improve over time.

>the stupid meme "my gayme will be 100% parallelized eventually" is utterly stupid.
Nobody disagreed with that, friend!
I would just like to add that this is also because there are algorithms which provably cannot be formulated in a concurrent manner.

I would also like to add that parallelization for performance isn't the only reason why you'd want to write concurrent software!
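Toy illustration of an inherently serial job (my own example, not from any benchmark): an iterated PRNG chain, where step i cannot begin until step i-1 has finished, so throwing cores at it buys you nothing:

#include <stdint.h>
#include <stdio.h>

/* xorshift64: every state depends entirely on the previous one,
 * so the loop below is one long dependency chain that cannot be
 * split across cores. */
static uint64_t step(uint64_t x)
{
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    return x;
}

int main(void)
{
    uint64_t x = 88172645463325252ULL;
    for (long i = 0; i < 100000000; i++)
        x = step(x);                /* iteration i needs iteration i-1 */
    printf("%llu\n", (unsigned long long)x);
    return 0;
}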

Their content is not predetermined. A large part of it must be created on the fly based on human input. That is an extremely interactive process that needs a huge amount of safeguards around the shared libraries and state.
In practice they can never be fully parallelized the way x265 encoding can, for instance.

You made this exact same thread before and got blown the fuck out by facts and stopped replying. Want me to copy paste from the archive?

By the way, what percentage do you think a common FPS game can achieve on that graph? I'd say more than 40-50% is wishful thinking.

your delusions do not count as proof.
also some of us have lives; not being around 24/7 proves nothing.
are you literally too stupid to get that?

>facts are delusions now
woah 2deep5me

you have literally posted 0 arguments here child.

>Their content is not predetermined. A large part of it must be created on the fly based on human input.
This isn't enough to explain why concurrency is hard.
The "content" of two "threads" (processes in the sense of CSP) doesn't affect the difficulty to write concurrent code at all.
The interactions between these processes is what matters most!
The more interactions there are, and the more complex the interactions resulting from user input, the harder it will be to write a correct concurrent program (assuming you aren't using a type system capable of proving your protocol correct, which might very well be possible in the future).

Well, to be honest that's what I mean by the inner, indirect reason: by having human input, then de facto you create a whole conglomerate of interactions that makes parallelization harder.

What are better concurrency primitives? Are we talking about thread-safe data-structures? Or something else?

It completely depends on what kind of tasks you implement concurrently.
Rendering calculations are some of the most expensive in games nowadays - and those are already being parallelized very heavily by graphics cards!
I cannot make an estimate, though.

>Child
You sure are obsessed with children. You're probably a fucking pedo.

Anyways, pedofuck, here are some benchmarks from real-world applications. According to you, they exhibit per-thread scaling that should be impossible. Get fucked.

tomshardware.co.uk/multi-core-cpu-scaling-directx-11,review-33682-2.html

Well, the GPUs are by definition parallel machines anyway, the topic is the CPU user space only.

> he proceeds to post that an 8 core is only 8% faster than a 4 core
yes, we know there are returns; the point is that they diminish hard

Also, the -E chips have 20MB of cache while the regular i7s top out at 8MB.

>then de facto you create a whole conglomerate of interactions
This is true, although the implication that more user interaction => more interactions between processes isn't true for many interesting and worthwhile problems.

Take a look at one of the process calculi out there, such as the pi-calculus or CSP.
There are certainly more *intuitive* ways to model concurrent applications than mutexes.
Proving the correctness of protocols is still an ongoing research matter, although there are already some neat demonstrational libraries and frameworks.
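For a crude taste of that message-passing style in plain C (a toy POSIX sketch, not any particular framework): a pipe plays the role of the channel, the kernel does the synchronization, and no mutex shows up in user code:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int chan[2];                 /* chan[0] = read end, chan[1] = write end */

/* "producer" process in the CSP sense: sends numbers down the channel */
static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 5; i++)
        write(chan[1], &i, sizeof i);
    close(chan[1]);                 /* closing the channel ends the conversation */
    return NULL;
}

int main(void)
{
    pthread_t p;
    int msg;

    pipe(chan);
    pthread_create(&p, NULL, producer, NULL);

    /* "consumer" process: blocks until a message arrives, no polling, no locks */
    while (read(chan[0], &msg, sizeof msg) == (ssize_t)sizeof msg)
        printf("got %d\n", msg);

    pthread_join(p, NULL);
    return 0;
}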

>Well, the GPUs are by definition parallel machines anyway, the topic is the CPU user space only.
Well, rendering is a problem just like any other.
I don't think this problem should be excluded from the argument just because separate hardware components exist to handle it.

Underrated post