Why don't motherboards just have a GPU socket and GDDR VRAM DIMMs?

Because it'd be fucking stupid.

y tho

packaging
if it ain't broke
thermals

SHUT THE FUCK UP. THIS ISN'T A CONVERSATION

touchy...

> Why don't motherboards just have a GPU socket
Does it look like a typical gaming GPU would fit, being currently like half the size of a motherboard?

Also, the fast PCI socket IS quite strongly a GPU socket.

> GDDR VRAM DIMMs
Okay, let's start with "why"? RAM is already fast enough that it's not the bottleneck for basically all users.

And as I recall, GDDR roughly doubles throughput at the cost of doubled latency. That wouldn't be a good trade-off even if you wanted "faster" RAM, which, as I just said, you probably don't.

GPUs use a very wide memory bus (256 or 384 bits) and very tight timings. A socketed GPU like a 1070 would need as much mainboard space as an HEDT CPU with four-channel memory, and it would still work slower. So it's kinda pointless; what we really need to cut down the bullshit is just a standardized GPU cooler mount.
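To put rough numbers on the bus-width point, here's a back-of-the-envelope calc. Theoretical peak bandwidth is just bus width (in bytes) times transfer rate; the clocks below are illustrative ballpark figures, not exact SKU specs:

```c
#include <stdio.h>

/* Theoretical peak bandwidth in GB/s:
   bus width (bits) / 8 -> bytes per transfer, times MT/s, scaled to GB/s.
   Clocks are illustrative ballpark figures, not exact SKU specs. */
static double peak_gbs(double bus_bits, double mega_transfers)
{
    return bus_bits / 8.0 * mega_transfers / 1000.0;
}

int main(void)
{
    /* dual-channel DDR4-2666: 2 x 64-bit channels */
    printf("DDR4-2666 dual channel: %6.1f GB/s\n", peak_gbs(128, 2666));
    /* 256-bit GDDR5 at 8 GT/s, roughly a 1070-class card */
    printf("GDDR5 256-bit:          %6.1f GB/s\n", peak_gbs(256, 8000));
    /* same GDDR5 on a 384-bit bus */
    printf("GDDR5 384-bit:          %6.1f GB/s\n", peak_gbs(384, 8000));
    return 0;
}
```

Even a four-channel HEDT board only doubles the DDR4 number, so a socketed GPU fed from mainboard DIMMs starves.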

you already have a PCIe slot for modular graphics.

> GDDR
But HBM2 is what the cool kids use on their 4096-bit-wide interfaces. GDDR5 is for 384-bit losers.
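For scale, assuming ~2 GT/s per pin (ballpark for HBM2): 4096 bits × 2 GT/s ÷ 8 ≈ 1 TB/s, versus 384 bits × 8 GT/s ÷ 8 = 384 GB/s for the widest GDDR5 setups.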

Contrary to what these brainlets are saying, it's not a terrible idea; there's just little to gain versus the large cost to the industry of standardizing sockets, cooling, etc. It would also restrict their flexibility.

Whoa there, slow down. Not only have you misinterpreted OP by thinking he was proposing REPLACING system ram with vram (or you're just spouting unrelated shit?), but you also need to look up "slot vs socket" and "gpu", as you've shown you misunderstand both of those terms. The "GPU" is nowhere near "half the size of a mobo"; you're obviously referring to the graphics card.

> Not only have you misinterpreted OP by thinking he was proposing REPLACING system ram with vram (or you're just spouting unrelated shit?)
Yea, because unless you're combining the system RAM with the GPU RAM, you're not really gaining shit. You're just locking the bus width and the amount of memory the GPU can use to whatever your mainboard has, for no actual gain.

> "slot vs socket"
I'm sure he was worried about the shape.

> The "GPU" is nowhere near "half the size of a mobo", you're obviously referring to the graphics card
Yea, and once you move all the other currently external components (minus the RAM) into a new, bigger GPU package or onto the board and then stick a cooler on top, it's still going to be just as big.

>GPU socket
They need a major cooling solution that varies with the GPU being used. This isn't something a motherboard maker could account for.

NVLink on servers already works like that, and even then it's mainly viable because the HBM sits on-package. It's unrealistic to have VRAM as a separate component; the signal tolerances are too tight for that.

Why don't GPUs have CPU socket and slots for RAM DIMMs?

There was a time when graphics cards actually had DIMM slots, in the pre-2000 era, but still.

> GPU socket
Because PCI-E is an industry standard that works better

> and GDDR VRAM DIMMs?
Because of latency, cross-compatibility issues, and the VRM headaches that follow

The makers want you to upgrade whole cards every year or so. They want your hard-earned cash; they don't want you sending your money to the RAM companies.

Only some high-end workstation cards tho

there was also a time when "integrated gpu" meant integrated onto the motherboard, not the cpu
either as its own chip, like any modern gpu, or as part of the northbridge
sometimes those also had video ram slots on the motherboard

God damn it, we've gone backwards in motherboard design. Being able to upgrade the VRAM would be amazing.

We'd probably need serial memory first.
Like memory that works like PCI-E: several lanes that access some fast-as-fuck SRAM cache in front of the memory itself, etc.

Better question: why isn't RAM integrated with the CPU yet?

Why not combine CPU and GPU RAM like the PS4 or new Xbox?

>Why not combine CPU and GPU RAM like the PS4 or new Xbox?
Maybe because they have absolutely shitty graphics compared to any graphics card sold on the market nowadays. Good god you people are stupid.

>Maybe because they have absolutely shitty graphics compared to any graphics card sold on the market nowadays.
That's because they're APUs though. There's no way to have an APU that isn't shit and have reasonable temps. Can you not share RAM between CPU and GPU without having an APU?

Posting from 2030: GPU, CPU, RAM, VRAM, and fast storage are all on a single IC, and the motherboard only has connectors for IO and some VRMs

it would be very painful
sharing the same dimms would require some kind of sync between the cpu and gpu, that is, either one of them would need to handle both, or you'd need some controller sitting between them and the ram to arbitrate
this alone would slow things down a lot, since one way or another only one chip can access the ram at any one time
not to mention regular ram and vram aren't tuned the same way (ddr for latency, gddr for throughput), so one side loses out depending on which kind you pick
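A toy model of that serialization, with one mutex standing in for bus arbitration (illustrative only; real memory controllers interleave and schedule requests, but the "one master on the bus at a time" pressure is the same). Build with -lpthread:

```c
#include <pthread.h>
#include <stdio.h>

/* Toy model: cpu and gpu threads contend for a single "bus" lock.
   Every access serializes behind the arbiter, so adding the second
   client roughly halves each one's effective throughput. */

enum { ACCESSES = 1000000 };

static pthread_mutex_t bus = PTHREAD_MUTEX_INITIALIZER;
static long shared_word; /* stands in for the shared DIMMs */

static void *client(void *name)
{
    for (int i = 0; i < ACCESSES; i++) {
        pthread_mutex_lock(&bus);   /* arbiter grants the bus */
        shared_word++;              /* one memory access */
        pthread_mutex_unlock(&bus); /* bus released */
    }
    printf("%s finished\n", (const char *)name);
    return NULL;
}

int main(void)
{
    pthread_t cpu, gpu;
    pthread_create(&cpu, NULL, client, "cpu");
    pthread_create(&gpu, NULL, client, "gpu");
    pthread_join(cpu, NULL);
    pthread_join(gpu, NULL);
    printf("total accesses: %ld\n", shared_word);
    return 0;
}
```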

Does using DIMMs add latency? It looks like the original 360 used shared memory with soldered RAM.

It's not idiot-proof. That's the only reason. The last thing a company wants is a bunch of morons cracking the GPU die because they overtightened the cooler and then requesting a refund. GPUs tend to heat up more than a CPU and typically use a lot more VRM phases as well. Giving one an IHS would only make the thermals worse.

Linux is getting HMM (Heterogeneous Memory Management) in 4.16.
This lets GPU and system memory be pooled into one address space.

Combine this with OpenCL and Vulkan and you can run general compute tasks on the GPU.
Pretty snazzy imo.
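HMM itself is kernel plumbing with no direct user-space API, but the closest thing you can touch today is OpenCL 2.0 shared virtual memory, where one allocation is visible to both host and GPU. A rough sketch (error handling elided; assumes the first platform exposes an OpenCL 2.0 GPU with coarse-grained SVM, link against OpenCL):

```c
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    cl_int err;

    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, &err);

    /* one allocation, addressable by both the CPU and the GPU */
    size_t bytes = 1024 * sizeof(float);
    float *pool = clSVMAlloc(ctx, CL_MEM_READ_WRITE, bytes, 0);

    /* coarse-grained SVM: map before touching it from the host */
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, pool, bytes, 0, NULL, NULL);
    for (int i = 0; i < 1024; i++)
        pool[i] = (float)i;
    clEnqueueSVMUnmap(q, pool, 0, NULL, NULL);

    /* a kernel would take the very same pointer via
       clSetKernelArgSVMPointer(kernel, 0, pool); */

    clSVMFree(ctx, pool);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    puts("shared allocation done");
    return 0;
}
```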