The ultimate red pill about multiple threads: It's the GPU's job to do it

GPUs are by definition parallel machines. CPUs are by definition serial machines. When NVIDIA sticks an iCPU in there, it's an inefficient bastardization, and when Intel sticks an iGPU in there, it's an inefficient bastardization. In any case, even if you stick them together on the same chip and both are strong, they are different machines for different jobs.
Now, in practice, that's exactly how it works. A game or any other desktop application will usually have to render stuff, and that rendering is done more efficiently on the GPU. The CPU's job is not to render, the CPU's job is to feed the core serial logic to the GPU.
Look at specific technologies and that's exactly where the wishful thinking for the opposite falls apart. Vulkan/nuOpenGL/nuDirect3D claim to work better with multithreading. But what they often don't tell you is that their batched submission offloads even MORE work from the CPU.

Other urls found in this thread:

stackoverflow.com/questions/27333815/cpu-simd-vs-gpu-simd

kys retard.

make me kid

You're a fucking idiot who doesn't understand why the GPU was made in the first place.

Why can't the GPU be utilized as a CPU in a desktop? Will there ever be a change in consumer computing where the industry switches from CPU+GPU to dual-GPU or GPU-only computers?

>all the vitriol being spewed at OP by people who are angry at the truth
redpill confirmed
thanks OP, good thread

No, not everything can be parallelized to take advantage of a GPU.

REEEEEE stop using graphics cards for mining bitcoins and other cryptocurrency! Use CPUs!!! You can't cheat with them like you keep doing with power supplies + graphics cards with no motherboards, no memory, no HDD, nothing.

The single core performance is shit.
The whole architecture is designed around SIMD, which isn't the shape the majority of your computations take, especially not OS level ones.
The current GPUs don't have interrupts or support for virtual memory, which is sort of a roadblock to running operating systems on them.

We may see the two merge more and more, like the current trend of having APUs instead of true CPUs in many machines, but SIMD architectures are not some magical silver bullet that makes speed appear out of thin air; they just do a couple of things really, really well.

> Why can't the GPU be utilized as a CPU in a desktop?
It can be used as a co-processor of sorts.

But it isn't a CPU, and it doesn't even sit in the more suitable location where a CPU sits.

Single-core performance is non-existent.

enlighten us child, what you typed is a tantrum, not an argument.

The main gist is that CPUs are great at doing "big things in serial". GPUs are almost the opposite: they are amazing at doing "simplistic things in parallel". As a result, a CPU is great for the main core logic that needs to be expressed in a complex manner by the highest-level programming languages, while GPUs are great for being told "take this little triangle, enlarge it by 10, put some pretty colors on it, now make it prettier, enhance" and other simplistic shit like that, which may look complex in the end but is really just a pile of simple declarative statements that don't require any complex logic.
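
To make the contrast concrete, here's a rough sketch (mine, not from the thread linked below) of the same "enlarge these vertices by 10" job done once as a plain serial CPU loop and once as a CUDA kernel where every thread handles one element. The kernel name, block size and data size are all made up for illustration.
--
#include <cstdio>
#include <cuda_runtime.h>

// GPU: one thread per element, all doing the same trivial operation in parallel.
__global__ void scale_vertices(float *xyz, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) xyz[i] *= factor;              // "enlarge it by 10"
}

// CPU: the exact same thing, one element after another.
void scale_vertices_serial(float *xyz, int n, float factor) {
    for (int i = 0; i < n; ++i) xyz[i] *= factor;
}

int main() {
    const int n = 1 << 20;                    // ~1M floats, arbitrary size
    float *d_xyz;
    cudaMalloc(&d_xyz, n * sizeof(float));
    cudaMemset(d_xyz, 0, n * sizeof(float));

    // Launch enough 256-thread blocks to cover all n elements.
    scale_vertices<<<(n + 255) / 256, 256>>>(d_xyz, n, 10.0f);
    cudaDeviceSynchronize();

    cudaFree(d_xyz);
    return 0;
}
--
Same arithmetic either way; the difference is that the CPU walks the array one element at a time while the GPU throws thousands of threads at it at once.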

Here's a nice thread on the matter

stackoverflow.com/questions/27333815/cpu-simd-vs-gpu-simd
--
It's a similar idea, it goes kind of like this (very informally speaking):

The CPU has a set amount of functions that can run on packed values. Depending on the brand and version of your CPU, you might have access to SSE2, 3, 4, 3dnow, etc, and each of them gives you access to more and more functions. You're limited by the register size, and the larger the data types you work with, the fewer values you can use in parallel. You can freely mix and match SIMD instructions with traditional x86/x64 instructions.
The GPU lets you write your entire pipeline for each pixel of a texture. The texture size doesn't depend on your pipeline length, i.e. the number of values you can affect in one cycle isn't dependent on anything but your GPU, and the functions you can chain (your pixel shader) can be pretty much anything. It's somewhat more rigid though, in that the setup and readback of your values is somewhat slower, and it's a one-shot process (load values, run shader, read values); you can't massage them at all besides that, so you actually need to use a lot of values for it to be worth it.
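
For the CPU half of that answer, this is roughly what "packed values limited by register size" means in practice: one 128-bit SSE register holds 4 floats and a single instruction works on all 4 lanes at once. The intrinsics below are the standard x86 ones, but the example itself is just an illustration of mine, not from the answer (plain host-side C++; it compiles fine inside a .cu file too).
--
#include <cstdio>
#include <xmmintrin.h>   // SSE: 128-bit registers, 4 packed floats per register

int main() {
    // Two packs of 4 floats, each pack living in one 128-bit register.
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);      // lanes: 1, 2, 3, 4
    __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);  // lanes: 10, 20, 30, 40

    // One instruction adds all 4 lanes; freely mixable with ordinary x86 code.
    __m128 sum = _mm_add_ps(a, b);

    float out[4];
    _mm_storeu_ps(out, sum);
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);  // prints 11 22 33 44
    return 0;
}
--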

And
--
Both CPUs & GPUs provide SIMD with the most standard conceptual unit being 16 bytes/128 bits; for example a Vector of 4 floats (x,y,z,w).

Simplifying:

CPUs then parallelize further by pipelining future instructions so they proceed faster through a program. The next step is multiple cores which run independent programs.

GPUs on the other hand parallelize by continuing the SIMD approach and executing the same program multiple times; both by pure SIMD, where a set of programs executes in lock step (which is why branching is bad on a GPU, as both sides of an if statement must execute and one result is thrown away so that the lock-step programs proceed at the same rate); and also by single program, multiple data (SPMD), where groups of those sets of identical programs proceed in parallel but not necessarily in lock step.

The GPU approach is great where the exact same processing needs to be applied to large volumes of data; for example a million vertices that need to be transformed in the same way, or many million pixels that each need processing to produce their colour. Assuming they don't become data-blocked or pipeline-stalled, GPU programs generally offer more predictable, time-bound execution due to their restrictions; which again is good for temporal parallelism, e.g. when the programs need to repeat their cycle at a certain rate, for example 60 times a second (16 ms) for 60 fps.

The CPU approach however is better for decision-making and performing multiple different tasks at the same time, and for dealing with changing inputs and requests.

Apart from its many other uses and purposes, the CPU is used to orchestrate work for the GPU to perform.
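
The "branching is bad" point from the quote above is easy to see in code. A rough, illustrative CUDA sketch of my own: threads in the same warp run in lock step, so when half of them take the if and half take the else, the hardware effectively runs both paths and masks threads off.
--
#include <cuda_runtime.h>

// Within a 32-thread warp, threads execute in lock step. When they disagree on a
// branch, the hardware runs BOTH paths with inactive threads masked out, so a
// divergent `if` costs roughly the sum of both sides.
__global__ void divergent(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (i % 2 == 0) {
        data[i] = data[i] * 2.0f;   // half the threads in every warp go here...
    } else {
        data[i] = data[i] + 1.0f;   // ...the other half here; the warp serializes both
    }
}

int main() {
    const int n = 1 << 16;          // arbitrary size
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));
    divergent<<<(n + 255) / 256, 256>>>(d, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
--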

>GPU is good at logic and can replace CPU cores

uhuh

Ignore the retarded idiot, he's been posting links to this thread in several other Ryzen threads trying to prove some kind of point.

>some kind of point.
we know you are mentally challenged, no need to announce it

>still no point

4 cores/8 threads is the sweet spot for gamers and the desktop, and it will stay the sweet spot for at least 10 years. There is a chance GPUs will become even more important.
That means there is a chance CPU threads will become even LESS important.

In fact that is EXACTLY what Vulkan et al. are doing. While they claim to let you use more CPU cores, in reality they offload even more work from the CPU into the GPU side. That's because their greatest contribution is the batched submission of work to the GPU, NOT the CPU multithreading.
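
Very roughly, the batching point boils down to this call pattern: record lots of cheap commands on the CPU side, then hand the whole pile to the driver/GPU in one submission, instead of making one heavyweight API call per draw. A sketch with made-up types (this is NOT the real Vulkan/D3D12 API, just the shape of the idea):
--
#include <vector>

// Hypothetical stand-ins for real API objects -- the point is the call pattern,
// not the names.
struct DrawCall { int mesh_id; int instance_count; };

struct CommandBuffer {
    std::vector<DrawCall> commands;
    void record(DrawCall dc) { commands.push_back(dc); }  // cheap, CPU-side only
};

// One submission hands the entire batch over at once; no further per-draw CPU work.
void submit(const CommandBuffer &cb) {
    // ... imagine the driver/GPU consuming cb.commands here ...
}

int main() {
    CommandBuffer cb;
    // Old model: one API call (plus driver validation) per object, every frame.
    // Batched model: record thousands of draws cheaply, then submit once.
    for (int mesh = 0; mesh < 10000; ++mesh)
        cb.record({mesh, /*instance_count=*/1});
    submit(cb);
    return 0;
}
--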

>stuff from the CPUs and into the GPU logic
this is not how gpgpu works

Judging by your post, that's how a strawman argument works.

>yes, if you enable ECC support in the BIOS so check with the MB feature list before you buy.
>by AMD_James

The only thing that might not work is maybe those insanely priced LVRDIMMs.

Elaborate

RDIMMs aren't supported, but you can use normal unbuffered ECC DIMMs

>GPUs are by definition parallel machines. CPUs are by definition serial machines.
I think you need to look up the definition of definition.

He probably got it from that Amdahl's law shitposter

>GPUs are by definition parallel machines. CPUs are by definition serial machines.
neither of these terms precludes either of these concepts. the nature of a piece of hardware is analogous to the nature of the problem it tries to solve. if we ever find a different, more efficient way to design GPUs without thousands of CUDA cores or SPUs, they will STILL be GPUs.

>When NVIDIA sticks an iCPU in there, it's an inefficient bastardization, and when Intel sticks an iGPU in there, it's an inefficient bastardization.
there is no cosmic law that dictates this. iGPUs are inefficient due to memory I/O. expect this problem to disappear before the end of the decade.

I'm not sure what you're trying to convey with "iCPU", but anything of the sort would most likely be some form of ASIC (eg for x264 transcoding). there is no reason for Nvidia to manufacture and sell complex CPUs along with its GPU products. the only exception is their Tegra line, which is by no means a "bastardization".

>The CPU's job is not to render, the CPU's job is to feed the core serial logic to the GPU.
the CPU's job is also to make draw calls via graphics APIs, and this is by no means a simple ordeal. in fact, Vulkan and DX12 exist for the purpose of reducing the CPU's role in this. as scenes grow more complex the load on the CPU will inevitably increase, requiring superior hardware.

you are out of your depth, OP.

>ye, but, i can make shitty unrealistic stupid code if I want to
: the post

Can you hurry up and give yourself a trip so I can block your stupid ass?

go back to plebbit faggot kid