So are Intel going to make the same criticism of Nvidia as they did of AMD?

hexus.net/tech/news/graphics/107560-nvidia-evaluating-multi-chip-module-gpu-designs/

This seems like a mediocre idea. Interconnect bandwidth might actually be an issue between modules with regard to memory access times.

It's going to have to have an internal bandwidth of at least 256GB/s if a module is supposed to have access to the full memory speed available to its neighbour.
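To see where a figure like 256 GB/s comes from, here is a rough back-of-envelope sketch in Python; the 256-bit bus width and 8 Gbps per-pin rate are illustrative assumptions (roughly a GDDR5X-class card), not numbers from the post.

# Back-of-envelope for the 256 GB/s figure (bus width and pin rate are
# illustrative assumptions, not figures from the post).
bus_width_bits = 256      # memory bus width of one module
pin_rate_gbps = 8         # effective data rate per pin in Gbit/s

local_bw_gbs = bus_width_bits * pin_rate_gbps / 8   # bits -> bytes
print(local_bw_gbs)       # 256.0 GB/s of local memory bandwidth

# For a module to read a neighbour's memory at full speed, the die-to-die
# link has to sustain at least that much, on top of any local traffic.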

No, since Tigerlake-SP will also be MCM with EMIB.
Nvidia can't into NUMA anyway.

they got nvlink
amd got IF
intel can fuck itself

Since Nvidia is making Volta the "bigger dies" edition, something is brewing on both sides.

Navi is a sure thing for 2019; no idea what will happen after Volta.
As far as I remember from early slides, Volta was supposed to be like Navi, MCMed.

>they got nvlink
Meme point to point shit that's useless when it comes to interconnecting dies on one interposer.
>as far as I remember early slides volta
No.
Honestly Pascal was not even supposed to exist.
>intel can fuck itself
They have EMIB.

intel.com/content/www/us/en/foundry/emib.html

Intel already has superior technology, AYYMD can't even compete

>poor man's silicon interposer
LMAO.

Nvidia, being 2 years ahead as always, is probably realizing there isn't much potential left in die shrinking alone.

>2 years ahead
>still only has whitepaper
?

Infinity Fabric applies to both the interconnects on the interposer and the interconnects between sockets.
The pins assigned to it can also be configured as PCIe or SATA in various combinations.

Yes.
That's not the case for NVLink.
IF is a protocol.
NVLink is a physical interconnect with its own signaling.

>They have EMIB.
It's not a protocol; IF is trivial to fab as I understand it.
The important part is how they set up the data transfer itself.

Volta is already 2 years ahead of Vega, with Vega not being able to compete with Nvidia's 1-year-old lineup that shouldn't even exist.

They're already looking at future projects after Volta and its inevitable rebranded release somewhere down the road in 2019.

>NVLink is a wire based communications protocol

Literally the first line of the Wikipedia article.

Yes, there are issues, and they're not getting out of it scot-free.
But the problems of making huge chips or getting better manufacturing processes are a lot bigger.

>Vega not being able to compete
We'll know on the 18th; the Vega van is coming to Budapest.
The Vega van is a tell of something.
Either way it's win/win: if it's terrible we laugh, if it's good we buy.

>Volta is already 2 years ahead of Vega
HAHAHAHAHAHAHAHA
Please stop JHH.

>NVLink is a physical interconnect with its own signaling.
NVLink is basically just PCIe tuned for maximum performance and clocked out of this world. This is why an NVLink link will have length limitations.

It uses custom PHYs.
It's not transport-layer agnostic.

PCI-e is kinda meant to avoid length limitations and signal misalignment, etc.
But it does require some very specific circuitry that might take quite a bit of space.

I don't see why not.
It's being used in nVidia's GPUs as well as IBM's POWER architecture. It's an inter-die communication protocol, so I don't see why it can't be applied as an intra-interposer inter-die communication protocol.

>PCI-e is kinda meant to avoid length limitations and signal misalignment, etc.
Well that, and then there's also backwards compatibility.

>But it does require some very specific circuitry that might take quite a bit of space.
PCIe you mean? It comes in all sorts of form factors. What modern architecture doesn't support PCIe these days anyway?

Jesus Christ. EMIB has zero higher functions like Infinity Fabric. It's purely "point-to-point communication". It's like lobotomizing Infinity Fabric and claiming it's fixed.

You can't use something PCIe-like for an MCM GPU.
It needs something new; drivers are a huge issue.
They would need to rewrite the whole thing either way, is my bet.
Or will the hardware adapt itself to existing drivers?
No idea how they will pull it off. For compute tasks it's not that hard to do, the framework is already there, but for graphical loads, no idea.

POWER9 uses BlueLink only for nVidia GPUs, connecting the rest with PCI-E and/or CAPI.
>It's an inter-die communication protocol
No.
It's PCI-E on steroids.

PCI-E is made of several serial, differential signaling lanes.
The data is encoded using two tracks, one carrying the signal and the other its inverted copy, and the receiver reads the voltage difference between them.
It's a lot more resilient because common-mode voltage variations and most noise can't touch it, but it does require the complex semi-analog circuitry that generates and decodes this kind of signal, so it's probably a biggish circuit on each module.
But it might be worth the trouble.
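As a toy illustration of why a differential pair shrugs off common-mode noise, here's a conceptual Python sketch; it's not how a PCIe PHY is actually built, and it ignores the 8b/10b and 128b/130b line coding entirely.

# Toy model of differential signaling (illustrative only, not a real PCIe PHY).
# The driver puts complementary voltages on the pair; the receiver looks at
# the *difference*, so noise that hits both wires equally cancels out.
import random

def drive(bit, swing=0.5):
    """Return (d_plus, d_minus) voltages for one bit."""
    return (swing, -swing) if bit else (-swing, swing)

def receive(d_plus, d_minus):
    """Recover the bit from the voltage difference."""
    return 1 if (d_plus - d_minus) > 0 else 0

bits = [random.randint(0, 1) for _ in range(16)]
recovered = []
for b in bits:
    dp, dm = drive(b)
    noise = random.uniform(-0.3, 0.3)   # common-mode noise hits both wires
    recovered.append(receive(dp + noise, dm + noise))

assert recovered == bits  # the difference is unaffected by common-mode noise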

There seem to be a few people here with actual hardware knowledge. What the fuck are you still doing on Sup Forums? This place is all about consumerism and generals with Discord channels.

Sometimes you can still talk about technology.
Yes, you can even talk about GPUs here.

It's fun to occasionally correct misinformation spouted here and watch overconfident second-semester college students crumble. The Dunning-Kruger on this board is astonishing.

nvidia doesn't remotely threaten any of their markets so why would they?

I'll take HPC for 200.

what if i devolve this thread into Sup Forumsitics?

AVX basically exists so Intel doesn't get fucked raw by Nvidia.

Then explain why Intel cancelled all future IDFs after NVIDIA moved GTC up early in order to announce Volta.

>815mm^2 die
Oh boy, 1 known good die per wafer?
I'd tell you to hang yourself.

>Interconnect bandwidth might actually be an issue

Your point is totally valid. That being said, Crossfire is (was?) a thing and performance did increase running two 7850s, for example. But you didn't get twice the performance with two cards; it was more like 1.4x one card.

If a 4-module card just gave you the performance of, say, 2.5x one module, then you're still looking at higher performance (see the sketch below).

This does make me wonder exactly what kind of limitations they are running into. Asking the simple question "why would they do this" doesn't have that many potential answers beyond "they are running into a wall increasing the performance of a single GPU module design".
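Putting rough numbers on the scaling argument above (the 1.4x and 2.5x figures come from the post; normalizing one GPU/module to 1.0 is just an assumption for illustration):

# Quick sanity check of the multi-GPU scaling argument.
single_module = 1.0                     # one GPU or one module, normalized

crossfire_total = 1.4 * single_module   # two discrete cards, ~1.4x scaling
mcm_total = 2.5 * single_module         # hypothetical 4-module MCM card

print(crossfire_total / 2)   # 0.70  -> per-GPU efficiency in Crossfire
print(mcm_total / 4)         # 0.625 -> per-module efficiency on the MCM
# Even with worse per-module efficiency, the MCM part still lands at 2.5x a
# single module in absolute terms, which is the point being made.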

I seriously doubt they will be able to compete on high-end performance. Not sure it will be competitive on price/performance either; my guess is that they will try to price it slightly below a 1070.

Intel is coming out with a new arch soon anyway.
With i9 they just throw out the rest of the old silicon they still have.
Intel has like 10x the manpower and actually builds processors themselves instead of letting pajeets do it for them. I bet half the technology in Ryzen is stolen directly from Core series processors via reverse engineering. AMD just isn't good enough. I'm not saying AMD should go bankrupt, and I'm not even an Intel fanboy. Actually I don't even have a notebook or PC anymore, only a tablet and smartphone. Everyone who thinks AMD can beat Intel is a daydreamer. Just look at AVX-512; AMD doesn't even come close.

>I seriously doubt they can compete in high-end
Oh, people said the same before R300 launched.
History repeats itself.

>reverse engineering a chip with billions of transistors spread in several, SEVERAL layers.
I think it's just easier for them to do the thing themselves, as those chips are designed with pseudo programming languages and a lot of macroing.
But the main change here is that AMD learned that you can't rely on programmers parallelizing code and not depending on FPU performance.
Bulldozer would be an absolute demolisher if the code were perfectly parallelized and didn't rely much on the FPU, as it was designed to shove as many cores as possible into the smallest footprint, with the sacrifice of cores sharing the FPU.

Now they've seen this is stupid and created something more conventional.
It's in the same vein of stupidity as the Pentium 4 with its HUGE pipeline and the wacky dreams of clocking it to 15 GHz.

>as it was designed to shove as many cores as possible into the smallest footprint,

CMT's only real strength is getting a lot into a limited die area, but fundamentally CMT is all about sacrificing individual thread performance for increased overall throughput, which isn't an ideal option for desktop and server deployments. For mobile and embedded solutions CMT can work some serious magic.

I imagine you can get away with crappier FPUs on mobile.