Why do we still connect graphics cards via PCIe? Why not recreate the standard motherboard but with dual sockets...

Why do we still connect graphics cards via PCIe? Why not recreate the standard motherboard but with dual sockets, one for a CPU and one for a GPU chip?

We are seeing huge improvements in integrated graphics, but integrated graphics hinders CPU performance and vice versa. Focusing all the graphical power on a single socketed chip would allow for smaller and thinner cases, with the advantage of being able to cool both the CPU and the GPU with proper CPU coolers.

There's no excuse for this not to be happening, especially now that HBM2 is being worked into GPUs. We need to reinvent the motherboard.


There's more on a graphics card than just the GPU. Having a GPU socket on the motherboard would mean that none of those components can be upgraded without a new motherboard. We'd be taking a step backwards.

Could you not have GDDR slots too alongside conventional ram?

Wouldn't that make it more upgradeable?
Think of how much better a gtx 670 would have aged if it had 4GB vram instead of 2. That's why the 7950 started to pull ahead after a few years.

Well sure but you could just as easily put GDDR slots on an expansion card.

Why should we recreate something that works perfectly fine for everyone except you for some reason?

>Having a GPU socket on the motherboard would mean that none of those components can be upgraded without a new motherboard
Why not? You can upgrade CPUs with integrated graphics without replacing the motherboard.
Sure, every few generations the socket would change, but it would still be upgradeable within a certain range.

Why in the world would you want that? PCIe is already way faster than graphics cards currently need. Why would you want to lose the excellent existing compatibility in exchange for... What? What would you even conceivably gain by mounting the components to the motherboard?

* Plenty fast
* Universal, easy to swap out (only concern is card length with regards to the case & cooling)
* The GPU is in a good position for air flow

I don't get it.

Meanwhile you can currently take a brand new graphics card and install it in a 10+ year old computer. Again, we'd be taking a step backwards.

>takes way too much space
>heats up like a motherfucker
>wastes PCIe lanes
>"works perfectly"

The GPU would still be "wasting" PCIe lanes if it was socketed on the motherboard, it just wouldn't be using a PCIe slot to utilize those lanes.

Now you'll need a motherboard with a specific type of socket for your CPU and a specific type for your GPU. Manufacturers would suddenly have to support all kinds of random combos and you'd have to keep upgrading your shit all the time.

>takes way too much space
No it doesn't.
>heats up like a motherfucker
No it doesn't.
>wastes PCIe lanes
No it doesn't.

So what's the problem again?

fair point, I didn't think of that
although since AMD owns ATI, and Intel is rumoured to be starting to produce its own GPUs, I can see this being good for business for both companies

ah, the classic "just deny everything" method, good one

The CPU should be an expansion card. Everything should be an expansion card. I want a motherboard with just PCIe lanes.

The last time Intel tried to make a GPU it turned into the Xeon Phi. Are they really trying again?

It doesn't take up way too much space. That's a function of your case. The actual size of a motherboard and its components is small. The choice of case is not a PCIe problem.
It doesn't heat up like a motherfucker unless you have insufficient cooling and airflow. Again, this is not an issue with PCIe. It's an issue with you.
It doesn't waste PCIe lanes any more than hard drive cages waste SSD space. Again, this is a problem with you.

So tell me again, why should the standard that works fine for everyone else be changed?

probably fewer than 16, though, since CPU+iGPU chips take advantage of 16 lanes and they're effectively 2-in-1 components

Even reducing the number of lanes going to the PCIe port would free up enough to power a few SSDs.
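To put rough numbers on that claim (a back-of-the-envelope sketch: the ~985 MB/s per PCIe 3.0 lane figure is the commonly cited post-encoding rate, and the x16-to-x8 split is just an assumed example, not something from this thread):

```python
# Rough math: how many NVMe SSDs could freed-up PCIe lanes feed?
# Assumed figures, not measurements:
#   PCIe 3.0 ~ 985 MB/s per lane after 128b/130b encoding
#   a typical NVMe drive uses an x4 link

MB_PER_LANE = 985
lanes_freed = 8              # e.g. dropping the GPU link from x16 to x8
nvme_link_width = 4

drives = lanes_freed // nvme_link_width
total_mb_s = lanes_freed * MB_PER_LANE
print(drives, total_mb_s)    # 2 drives, 7880 MB/s of aggregate bandwidth
```

So even halving the GPU link would leave room for a couple of full-speed NVMe drives, which is the point being made.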

>it doesn't take too much space
Yes it does. HBM has helped reduce that, but GDDR5 cards are still huuuuuuuge

you sound like an Idea Man, the sort that hasn't spent one hour studying the subject but has now appointed himself to be in charge of the situation and to tell other people how to do things

perhaps you should run for president

Your personal opinion on whether graphics cards are too large is not a problem with PCIe.

So tell me again, why should the standard that works fine for everyone else be changed?

I'm not defending the extinction of PCIe, I'm defending the extinction of graphics cards

>it's your case
any case that takes ANY current GPU is at least twice as big as a GPU-less case. You need airflow if you have a GPU, you know that, right? And how is 60°C not hot? It might be well below the limit for GPUs, but it's still hot, and there's no reason to have a big idle component at that temperature inside your case. If it were a single socketed chip, you could pair it with an all-in-one cooler alongside the CPU's, and it would take up way less space than a typical huge-ass GPU that is almost the size of the motherboard itself, all while allowing for a smaller case. Even ITX GPUs are huge compared to the motherboard.

That's not me you're replying to, but tell me, why does your personal opinion on graphics card sizes matter and mine doesn't?

I've got a file server (Q6700, 8GB RAM, 12TB of storage across 6 drives). What can I do with it as a project?

>case with a GPU is necessarily larger than a case without a GPU
Are you kidding me?! That can't be right!
>How is 60°C not hot
It isn't hot by any standard that means anything. It's not in any danger of being damaged by running at that temperature.
>idle component
If your GPU is idling at 60°C it's because you have bad airflow and live in heat and dust.
>all in one cooler
Radiators are louder and necessarily much hotter than air cooler heatsinks... unless they're large, which is counterproductive to your whining.
>smaller cases
Your personal opinion on what is too big is irrelevant to PCIe.

So tell me again, why should the standard that works fine for everyone else be changed?

all in one as in all in one air cooler, which would be easily achievable with 2 chips on the motherboard

good for you for keeping up with your autistic answers though

>all in one air cooler for both GPU and CPU
So you want to replace "large" cases and "large" graphics card housings with a large heatsink and many fans? Great idea. I wonder why no one else has tried to do this. Seems like it would make you a millionaire easy.

>why should the standard that works fine for everyone else be changed?
>what is mATX, mITX, mSTX, NUCs, etc

I mean, if full ATX worked fine for everyone else why would we ever change, right?!

you're an idiot

I wasn't aware that PCIe connections differed between form factors. Seems strange for a standard to not be standardized.

Oh wait.

There's been buzz around mini-desktop motherboards that can use the MXM GPU card standard used in laptops and iMacs.

The surface area of such a cooler would be much larger, which would also allow for a much larger fan. There are low-profile CPU coolers capable of dissipating the heat from 100W CPUs, so why do you think you'd need large heatsinks and many fans to dissipate 200-something watts, especially with a much bigger surface area?

like I said previously, I'm not against PCIe ports, you autist

You don't understand how heat works nor how much heat a single chip outputs.

It's coming. GPUs are becoming too fuck-huge and cumbersome. But you are going to need something easier to work with than a GPU socket identical to the CPU socket, because the process of installing a CPU is not idiot-proof.

It's called the ASRock DeskMini GTX/RX
Instead of providing PCIe x16 connectors, it uses an MXM board connector, usually used in laptops for high-end GPUs.
This allows you to mount various GPU solutions into the case without modifying the cooling solution, and optionally connecting an extra power line to the MXM board.

The reason that PCIe x16 is so prevalent is that it was standardized for many different devices, not just GPUs, and it works really, really well.

...

you're literally the one who doesn't understand how heat works
Did you know that one major sign of autism is the angry refusal of anything new?

>GPUs are becoming too fuck-huge and cumbersome
(Ignoring that this isn't entirely true; see the R9 Fury X, RX 480 and R9 Nano.)
That's all cooler. If you put the GPU on the motherboard that cooling hardware isn't going to magically vanish, it's just going to take a different form.

It would have inherent latency if you decoupled the GDDR from the GPU. You would need a really fuckin' fast bus from the external GDDR to the GPU to get the same performance.

Also, since 2.5" drives are being deprecated, this is the sort of storage arrangement that you can expect in the future.

I will continue to enjoy the current standard for another few decades while you whine about things you don't understand. I hope in a few years, when your age begins with a 2, that you will come to understand these things better.

that is sweet, I hope it takes off

but are brands really willing to put out MXM GPUs? It seems like they would lose a lot of money from laptop sales

MXM GPUs already exist

We already have to replace motherboards far too often. Having parts of the GPU on the motherboard would pretty much ensure every single upgrade you ever make would need a new one. You'd be tying power delivery, the number of cards used, video output and god knows what else to the motherboard.

>a-another internet battle won *tips fedora*

I know, I meant put out for sale
as far as I know the only way to get one is to remove it from a laptop

>takes way too much space
That I'll give you.

>heats up like a motherfucker
And how the fuck is merging it into the motherboard and then having to strap a giant aftermarket cooler to it going to fix that?

>wastes PCIe lanes
Which it still will...

I always wanted to go the other way. I always wanted to see a CPU in a PCIe slot (I know there are lots of reasons not to), but it would be hilarious to not have to worry about AMD vs Intel mobos, or amd64 vs IA-32 vs ARM, and do everything over PCIe transport

>no
>no
>what the fuck else do 99% of consumers use pcie lanes for
>works fine for me

You can buy new MXM GPUs
eurocom.com/ec/vgas(1)ec

>architecture agnostic motherboard
That would be awesome

The thing is, if they had kept going at it, the Phi could have murdered NVIDIA and AMD.
It's still one of the most powerful cards for compute tasks on the market.
Makes Teslas look like child's toys, and even AMD's FirePro W9100 was beaten by it.

>uses an image of a triple slot card
Who the fuck uses those? Of all the builds I've done and builds I've encountered (friends, classmates, etc.), I've seen one of those things a grand total of once. Whenever I've been pricing components it has always been a fucking stupid idea to buy one.

a dual-slot card is still big, and there hasn't been a real single-slot card in years. Today's single-slot cards effectively always occupy two slots because of the cooler.

I'd be cool with this too
>dual pcie config with a GPU and a CPU
>AMD and Intel both compatible

>make a port specifically for graphics cards
user, this isn't the 1980s. We don't want billions of super-niche ports on our computers anymore.
If anything, having non-standardized CPU sockets is a crime. PCIe v2 needs to be capable of accommodating CPUs and RAM. A mobo should have just a fuckton of slots, and if I want 10 CPUs, then by god I'll have 10 CPUs

You have no idea how much I want one of these.

what card is that? I need this.

>gaymur problems

>filename

Look at the filename and guess, you dumb fuck. It's an RX 460.

As far as I can tell, it's not even a real product. I can't find any product listing for it other than the HIS website.

>Think of how much better a gtx 670 would have aged if it had 4GB vram instead of 2. That's why the 7950 started to pull ahead after a few years.
It really isn't.
7950 clocked much better and had better per-shader performance, along with the gains from driver maturation.
There were 4GB versions of the 670 and it still kind of aged like cheese.
Source: I had a 670.

it's like it can't be made either, look at quadros all being single slot cards

hopefully there'll be some single slot Vega options

You mean aged like milk. Cheese ages very well in some cases.

>it's like it can't be made either
it's not*

OH GOD IT'S A METAL SHROUD.
I WANT THIS SO FUCKING MUCH WHY U NO PRODUCTION RUN HIS?????!!!

So excited I forgot the picture.
It's an aluminium shroud.
At least one of these cards exists.

madshrimps.be/articles/article/1000953/HIS-RX-460-Slim-iCooler-OC-2GB-Video-Card-Review/0#axzz4YFvUrHX0

>put something in a usb socket
>its wasting a usb socket!
this is your stupid logic

>
>>takes way too much space
Get a smaller card (R9 Nano, etc.)
>>heats up like a motherfucker
If you consider 40 °C at idle high, I'm not sure what to tell you.
>>wastes PCIe lanes
If you don't use it, then it's a waste, but if you're a professional using programs such as DaVinci Resolve, it won't be a waste.
>>"works perfectly"
So what's the problem again user?

>Could you not have GDDR slots too alongside conventional ram?

No. It would:
1) be too space-inefficient: you'd need four 64-bit-datapath DIMMs to match a 256-bit card's aggregate bus width
2) have lower performance: GPUs get high clock rates on memory interfaces because the PCB traces are of minimum length and signals don't need to cross socket/slot boundaries
3) limit bandwidth based on slot count: nobody would want to have to replace a mobo to get higher GPU performance
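Point 1 is easy to sanity-check with back-of-the-envelope arithmetic (the 7 Gbps GDDR5 and DDR4-2400 per-pin rates below are assumed typical figures, not numbers from this thread):

```python
# Socketed DIMMs vs a GPU's soldered GDDR5: width and bandwidth.
# All per-pin rates below are assumed typical figures.

DIMM_WIDTH_BITS = 64      # standard DDR DIMM data path
GPU_BUS_BITS = 256        # common mid/high-end GDDR5 aggregate bus

dimms_needed = GPU_BUS_BITS // DIMM_WIDTH_BITS
print(dimms_needed)       # 4 slots just to match the bus width

gddr5_gb_s = 7.0 * GPU_BUS_BITS / 8                    # ~7 Gbps per pin
ddr4_gb_s = 2.4 * DIMM_WIDTH_BITS * dimms_needed / 8   # DDR4-2400
print(round(gddr5_gb_s), round(ddr4_gb_s))             # 224 vs 77 GB/s
```

And that's before point 2: even if you burned four slots on GDDR DIMMs, the socketed interface couldn't clock anywhere near as high as traces soldered millimetres from the GPU die.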

Are you retarded? Single-slot Quadro and FirePro cards are put into giant racks, the size of a refrigerator at minimum, with dedicated cooling systems that push air through the entire enclosure. They don't need large coolers because an HVAC-tier ventilation system takes care of it all for an entire room full of them.

The cooler almost inevitably covers up a PCIe x1 slot.
If I want an HTPC with a good GPU, Ethernet NIC, and maybe a PCIe based WiFi card or USB hub, I would need a single width video card, and would be forced to watercool it with a modified bracket.

My xfx 7870 DD had an aluminium shroud. It's not that big a deal.

Or you could just go for a different form factor motherboard.

The RX 460 is not a premium card. Nobody would expect an aluminium shroud on one.

which would defeat the purpose of wanting to go smaller

This thread is not about bigger form factors.
This thread is about as small as you can get, and for that, a single slot GPU would be fantastic.
Also, they're just cute.
>CUTE!!!~

I'm not so autistic that I associate random strings of numbers with various products.

$700 for a gtx 970?
user...

Idk what you're talking about, user; my GTX 680 4GB aged really well for a card its age.

Still matching my friend's R9 280X.

I'm just pointing out that MXM GPUs are already available for purchase. I'm sure if MXM on the desktop catches on prices will drop

OK, here's a 7750. Aluminium shroud and single slot.

I hope so
t. M18x owner

>boner

Good lord that is a sexy 3D Graphics Accelerator.

Because it sacrifices backwards compatibility. Imagine if GPU upgrades were like CPU upgrades, where you would have to toss out your old motherboard and old RAM (and in this case old CPU as well) in order to upgrade your GPU.

Likewise, it allows customers from previous gens to purchase the latest products.
Another important fact is it constrains NVIDIA and AMD to a specific standard they are forced to comply with, and not restrict themselves into proprietary models. Imagine if there were AMD motherboards designed to work with AMD GPUs, AMD motherboards designed to work with NVIDIA GPUs, Intel motherboards designed to work with AMD GPUs, and Intel motherboards designed to work with NVIDIA GPUs.

here's an XFX single-slot 460
newegg.com/Product/Product.aspx?Item=N82E16814150788

there's also a fanless one

that's also the last released graphics card that is true single slot

at least the format died with a sexy card, not a bad way to go

If only the top plate wasn't red. I used to own one, I sanded it down to the aluminum finish.

They used the same basic design from the hd 7000 series through to the 300 series. 200 series looked the best IMO.

It's a shame these coolers all have a reputation for being awfully designed pieces of shit in terms of functionality.

Those single-slot blowers fucking hiss like a motherfucker. I remember my 2600 Pro was one of those and boy was that thing fucking noisy at idle. Don't even get me started on when I put some load on it.

Want.
WantWantWantWantWant.

The post directly above yours begs to differ.

Honestly, I had no idea they reused this design for the lower-end cards in the 400 series. I thought everyone was going full-on Lamborghini/RGB this generation

I mainly want one as a VGA-passthrough boot primary for that tasty amdgpu support in a small case.
It wouldn't ever get any real load on it.
As far as I know, there aren't any GCN1.2/1.3 cards in single slot form factor. All the 240/250 cards are first gen GCN, which has really bad amdgpu support.

Rx 460 posted twice above you is Polaris 11, which is GCN 1.3.

It seems like the naming scheme changed.
I meant second and third gen, which would be 1.1/1.2 I guess??
I'm currently on a 380X, which I remember being 1.3, but I could be wrong.

You go based on the arch of the gpu itself, not the name-scheme.

Pitcairn (7850, 7870, 270, 270X, 370) and Tahiti (7950, 7970, 280, 280X) are GCN 1.0.

Hawaii (290, 290X, 390, 390X) is GCN 1.1.

Tonga (285, 380X) and Fiji (the Fury cards) are GCN 1.2.

Polaris (460-480) is GCN 1.3.

AMD refers to each generation as GCN 1, GCN 2, etc., but the chips have unofficially been called 1.0-1.3 because they haven't actually changed a whole lot at their core.
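The mapping in the posts above, as a quick lookup table (chip and card names are the ones listed in this thread; the 1.x labels are the informal ones, not AMD's official naming):

```python
# Informal GCN revision per chip family, per the list above.
GCN_REV = {
    "Pitcairn": "1.0", "Tahiti": "1.0",
    "Hawaii": "1.1",
    "Tonga": "1.2", "Fiji": "1.2",
    "Polaris": "1.3",
}

# A few example cards from the thread mapped to their chips.
CHIP_OF_CARD = {
    "7870": "Pitcairn", "280X": "Tahiti", "290": "Hawaii",
    "380X": "Tonga", "Fury X": "Fiji", "RX 460": "Polaris",
}

def gcn_rev(card):
    """Look up the informal GCN revision for a card name."""
    return GCN_REV[CHIP_OF_CARD[card]]

print(gcn_rev("RX 460"))  # 1.3 -- hence the decent amdgpu support
```

This is why you go by the chip, not the card number: the 380X (Tonga, 1.2) is a newer arch than the 390X (Hawaii, 1.1) despite the smaller model number.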

you sound like a cynical loser

you are completely, utterly full of shit. That 280X matches performance with a 780 today.

Wikipedia has no reference to the 1.X naming scheme.
It now just refers to "Xxx Generation GCN"

>AMD referes to each generation as GCN 1, GCN 2, etc but the chips have unoficially been called 1.0-1.3 because they haven't actually changed a whole lot at their core

Nearly every review article will use the GCN 1.x moniker over GCN gen x.