Modern GPUs

So right now a GPU is:
>larger, more power-hungry, and more expensive than a CPU
>capable of running 16-, 32-, and 64-bit operations several times faster than a CPU can
>more technologically advanced
>equipped with better memory running at higher speeds

Why is it a secondary, subservient device to the CPU?
Why aren't we running our OS and software on our GPUs instead?


Eh

Because all those assumptions are bullshit. A GPU is something completely different from a CPU and built for completely different workloads.

Because it's more expensive and doesn't have the same registers as a CPU does?

Because GPUs have thousands of weak cores and CPUs have a few strong cores; they're different use cases.
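If you want to see that on actual hardware, a few lines of CUDA will print the numbers (this is just a sketch assuming a CUDA-capable card at device index 0; the exact figures vary by GPU):

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query device 0 (assumed to be your GPU)

    printf("%s: %d SMs, warp size %d, up to %d resident threads per SM\n",
           prop.name, prop.multiProcessorCount, prop.warpSize,
           prop.maxThreadsPerMultiProcessor);

    // e.g. 80 SMs x 2048 resident threads = ~160k threads in flight at once,
    // versus 8-32 hardware threads on a typical desktop CPU.
    return 0;
}
```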

No one has built an OS that runs directly on the GPU. The GPU also doesn't have all the bits a CPU has.

superuser.com/questions/308771/why-are-we-still-using-cpus-instead-of-gpus

>TL;DR answer: GPUs have far more processor cores than CPUs, but because each GPU core runs significantly slower than a CPU core and does not have the features needed for modern operating systems, they are not appropriate for performing most of the processing in everyday computing. They are most suited to compute-intensive operations such as video processing and physics simulations.

>>larger, more power-hungry, and more expensive than a CPU
An inherently serial device won't see as much gain from parallelism as an inherently parallel device.
>>capable of running 16-, 32-, and 64-bit operations several times faster than a CPU can
Not necessarily.
>>more technologically advanced
No.
>>equipped with better memory running at higher speeds
What does that even mean?

tl;dr: you're a fucking retard. Go learn how a computer works.

A GPU is not a subservient device, OP. A GPU is a full-blown computer with its own OS, interacting with your PC.

The GPU can be so powerful because it can focus on number crunching instead of handling interrupts, communicating with devices, and dealing with the branchy control flow an operating system needs.
GPU memory is high-latency, high-bandwidth, which would be terrible for an OS. CPU memory is low-latency, which really matters when you're switching context every few milliseconds and juggling multiple cores running independently.
GPUs use SIMD-style multithreading instead of MIMD multithreading, which limits their flexibility a lot.
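As a rough sketch of the kind of work that maps well onto that model (the `saxpy` kernel name and launch dimensions here are just illustrative): every thread runs the exact same instruction stream on its own element, with no interrupts, no device I/O, and no OS-style branching in sight.

```
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element: y[i] = a * x[i] + y[i].
// All 32 threads in a warp execute this same instruction stream in lockstep,
// which is why the hardware can skip the branch prediction and interrupt
// machinery a CPU core needs.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch ~1M lightweight GPU threads; a CPU would use a handful of heavy ones.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```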

I love how you all jerk your dicks over OS celebs like Stallman all day long but don't question the fact that none of them are good enough to write a parallelizable OS that runs on GPU hardware.

The reason software hasn't changed in two decades is that programmers are too lazy to do parallel programming. Serial is shit and we need to get past it.

>typical Sup Forums user

total handwaving non-answer

There are computers that do that - that is how the Raspberry Pi is architected: it's an ARM core (or several ARM cores) glued onto the VC4, which is a vector processor plus several shader cores, several vertex accelerators, and several fixed-function blocks (plus like a million registers on the mailbox bus from hell). The VPU boots first; it typically bootstraps the realtime OS ThreadX, which runs to service GPU calls sent from the ARM's OS, which boots afterwards.

GPUs, too, typically have an embedded CPU core dedicated to handling the scheduling, and that does indeed run a specialised OS (the ones I've seen run ARM).

The architecture of the shader part of GPUs is dedicated to running a huge number of thread clusters without a lot of branches. They are very deeply pipelined, with basically no branch prediction logic, so running general-purpose code on them would be pretty slow compared to a CPU. The memory mapping doesn't work the same way either, but all of that is hidden from you when using a GPU, because the compiler hides some of the nastier bits of the hardware from you, even to the point of taking your own function and quietly replacing it with a completely different function that does almost the same job (at least partly because, given the quick and extremely competitive pace of development of the two main GPU vendors in particular, a surprising amount of shit just plain doesn't work). This is one reason GPU drivers are often closed-source.
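A quick illustration of the no-branch-prediction point (the `divergent` kernel is made up for this post): when threads in the same warp take different sides of a branch, the hardware executes both paths one after the other with lanes masked off, so branchy code simply wastes throughput.

```
#include <cstdio>
#include <cuda_runtime.h>

// Odd and even lanes take different branches, so each 32-thread warp has to
// execute BOTH paths back to back with half its lanes masked off. There is no
// branch predictor to hide this; divergence just costs throughput.
__global__ void divergent(float *out, const float *in, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (i % 2 == 0)
        out[i] = in[i] * in[i];   // path A: even lanes
    else
        out[i] = sqrtf(in[i]);    // path B: odd lanes, serialized after A
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = (float)i;

    divergent<<<(n + 255) / 256, 256>>>(out, in, n);
    cudaDeviceSynchronize();
    printf("out[2]=%f out[3]=%f\n", out[2], out[3]);  // 4.0 and sqrt(3)

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

The usual workaround is to restructure the data so neighbouring threads take the same path, or to compute both sides and select the result, which keeps the warp in lockstep at the cost of some redundant arithmetic.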

Highly parallelised monolithic tasks with bulk computational requirements, not a lot of inter-task communication, and essentially no branches run very well indeed - HPC things like crypto computations (including, yes, cryptocoin mining, at least where the functions aren't currently beaten by specialised ASICs), some simulations, that kind of thing. Some neural net code is a good fit, too. Most of nVIDIA's money now comes from supercomputing.
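A toy sketch of that kind of workload (the "hash" here is just a made-up integer mixer, not a real cryptographic function, and the kernel name is invented): every thread grinds its own nonce independently, with no communication and no branches in the hot path, which is exactly what the shader array is good at.

```
#include <cstdio>
#include <cuda_runtime.h>

// Toy stand-in for a mining-style search: each thread scrambles its own nonce
// and records whether it "wins". No thread ever talks to another one, so the
// work scales across thousands of cores almost perfectly.
__device__ unsigned int mix(unsigned int x) {
    x ^= x >> 16; x *= 0x7feb352du;
    x ^= x >> 15; x *= 0x846ca68bu;
    x ^= x >> 16;
    return x;
}

__global__ void search(unsigned int target, unsigned int *hits, int n) {
    int nonce = blockIdx.x * blockDim.x + threadIdx.x;
    if (nonce >= n) return;

    unsigned int h = mix((unsigned int)nonce);
    if (h < target)
        atomicAdd(hits, 1u);  // one atomic per rare hit, not per thread
}

int main() {
    unsigned int *hits;
    cudaMallocManaged(&hits, sizeof(unsigned int));
    *hits = 0;

    const int n = 1 << 24;  // ~16M independent nonces
    search<<<(n + 255) / 256, 256>>>(0x00100000u, hits, n);
    cudaDeviceSynchronize();

    printf("hits: %u\n", *hits);
    cudaFree(hits);
    return 0;
}
```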

I'm not answering your retarded question.

A CPU is like 4 normal adults and a GPU is like 100 retards.

An AMD CPU is like 4 adults with slight retardation.

Because you're making the assumption a GPU is more advanced than a CPU. In what terms?

A CPU core is orders of magnitude more complex and advanced than a GPU core. There are a gazillion subsections on a CPU that aren't present on a GPU, not to mention a CPU core runs much faster than a GPU core.

A GPU is appropriate for heavily parallelized simple workloads like 3D rendering because, while a common CPU has 4 or 8 cores, a GPU has many, many more and can pull ahead. Unfortunately you can't run low-threaded applications on a GPU efficiently, because each core is too slow and also lacks the CPU's features.

>all tasks can be easily parallelized if we all wish for it really strong
>multi-threading has no overhead

I thought Sup Forums knew better.

You mean 8 adults.

Many tasks can be parallelized, but often they require heavy synchronization, which destroys the performance.
Another approach: instead of running one hard-to-parallelize task on 4 cores, run 4 hard-to-parallelize tasks, each on its own core. I believe this could be applied more than it is today, but it requires extra infrastructure and developers are lazy.
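A minimal CPU-side sketch of that second approach (the serial dummy task and the names here are invented purely for illustration): instead of trying to split one serial task across cores, you run four independent instances, one per thread, and the only synchronization left is the final join.

```
#include <cstdio>
#include <thread>
#include <vector>

// Deliberately serial dummy task: each iteration depends on the previous one,
// so a single instance can't be split across cores.
long long serial_task(long long seed, int iters) {
    long long x = seed;
    for (int i = 0; i < iters; ++i)
        x = x * 6364136223846793005LL + 1442695040888963407LL;  // LCG step
    return x;
}

int main() {
    const int iters = 50000000;
    long long results[4];
    std::vector<std::thread> workers;

    // Run four independent instances, one per core; no locks, no shared
    // state, just a join at the end.
    for (int t = 0; t < 4; ++t)
        workers.emplace_back([&, t] { results[t] = serial_task(t + 1, iters); });
    for (auto &w : workers)
        w.join();

    for (int t = 0; t < 4; ++t)
        printf("task %d -> %lld\n", t, results[t]);
    return 0;
}
```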

The GPU is like a huge truck that needs a driver to be of any use, and that driver is the CPU.

To oversimplify:

A CPU is like a thin, high-pressure jet of water.
A GPU is like a big, low-pressure pipe.

If some things have to happen before others, that doesn't bother the CPU, since it does everything in order (serially) anyway. The GPU has to wait, because its individual operations are slow even though many of them happen at the same time (in parallel), so on a chain of dependent steps it ends up being much slower.

Because GPUs are made for 1 job