>4GPUs on one Card
That's not that much
better than one
BITCH WHERE'S THE HBM!?
Housefire confirmed
Wouldn't it be shit in practice due to the usual multi-GPU performance issues?
The system only sees one tho
that's why Vega only had two stacks instead of 4.
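Rough math, assuming Vega 64-style HBM2 at ~1.89 Gbps/pin with a 1024-bit bus per stack (those figures are assumptions, not from this thread):

def hbm_bandwidth_gb_s(stacks, pin_rate_gbps=1.89, bus_per_stack=1024):
    # each stack contributes bus_per_stack pins at pin_rate_gbps each
    return stacks * bus_per_stack * pin_rate_gbps / 8

print(hbm_bandwidth_gb_s(2))  # ~484 GB/s with two stacks (Vega 64)
print(hbm_bandwidth_gb_s(4))  # ~968 GB/s if it had four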
Say it with me
16k cores
32-64 GB video ram
still a bottlenecked frontend
Nice, just another 20-month wait to upgrade if you want to use your FreeSync monitor
>Vega isn't worth the money from a fury x
W A I T
A
I
T
Nvidia is also gonna eventually do the same thing.
Much cheaper to use high-yield chips than giant ones that fail a lot.
"We can't make strong single items to beat the competition so our products have to consist of many weaker items together"
-AMD (Forever)
™
Kinda sad if you ask me.
Like when they compared 2x RX 480s to a GTX 1080.
But failed to disclose that the 480s lost in every other benchmark and came with the CrossFire latency and stutters.
>4GPUs on one Card
There's no reason to be "hyped" by this at all. But this IS the future of GPUs, not just from AMD but Nvidia too. Yes, they will both be doing this, indeed they'll have to. It's like how CPUs were forced to go multi-core when it became clear that it wouldn't be possible to push the GHz of a single core much higher.
Yes, it will be shit in practice but it will still be an improvement. A 4xGPU board won't give you four times the performance of a board with just one, but it doesn't matter. You're still getting higher performance out of it compared to just one GPU, even if four only perform like 2.5.
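To put toy numbers on that (the 50% marginal-scaling figure is invented just to match the 2.5-out-of-4 guess above):

def effective_gpus(n, marginal=0.5):
    # first GPU counts fully; each extra one only adds `marginal` of a full GPU
    return 1 + (n - 1) * marginal

print(effective_gpus(4))  # 2.5 -- four GPUs performing like two and a half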
NVidia's already doing it, there's just no products like it out there yet.
>NVidia's already doing it, there's just no products like it out there yet.
They have a white paper on it. AMD has patents on MCM going back to like 2011 and they thought 3D stacking would happen by like 2017.
>AMD’s Next Generation Navi GPU Will Be Launching in August 2018 at SIGGRAPH
There is absolutely ZERO chance that Navi is going to be ready by August 2018. For fuck's sake, it's October 2017 and Vega's drivers aren't even feature complete. Who the fuck is going to get a Navi driver even barely functional by August 2018?!
it's all very complicated. Vega was in development for two years, and if Navi started its development after Polaris hit the market, then 2018 sounds about right.
I'm aware they have leapfrogging hardware dev teams, but they don't have leapfrogging driver teams, and THAT is the problem.
what is it?
it works for CPUs because of the IO and stuff. what's the point of this other than to save budget?
unless this can scale 100% per module, it's already dead.
>wccftech
>a whole year from now
but i'll still wait because muh 7nm. I'm aiming for an almost completely new build in 2019.
>t.leather jacket man
It's simple, they will just release FuryX drivers with Navi.
>NVidia's already doing it, there's just no products like it out there yet.
There have been PowerVR GPUs like this for years.
Let me know when they can get one chip working, then I will be impressed by four underclocked, undervolted chips working in unison.
I didn't think about that.
How are 4 different chips gonna react to overclocking?
You would need 4 silicon lottery winners to have an overall good OC GPU
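A toy Monte Carlo of why that hurts, with made-up clock numbers (an MCM part can only run as fast as its worst chiplet):

import random

def max_stable_clock():
    # hypothetical silicon lottery: max stable clock ~ N(1500 MHz, 60 MHz)
    return random.gauss(1500, 60)

trials = 100_000
single = sum(max_stable_clock() for _ in range(trials)) / trials
worst_of_4 = sum(min(max_stable_clock() for _ in range(4)) for _ in range(trials)) / trials
print(f"avg single die: {single:.0f} MHz, avg worst of 4: {worst_of_4:.0f} MHz")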
Those are Nvidia pictures.
NVLink 2.0 is 300GB/s
AMD has already solved that for you by pushing the silicon to the limit and auto overclocking at the driver level.
nvlink has nothing to do with this.
so what's the difference between a GPU and a DSP?
A DSP can be any number of things, a GPU is a rather specific thing.
NVLink is too slow, too high latency for that.
that's not much of an answer. i mean a DSP like the KeyStone2 from TI for example. seems like they are both just a large bank of FPUs
>what's the point of this other than to save budget
Yields, you retarded fucknugget.
You can only make a die so big, and the 815mm^2 DOA100 is already pushing that.
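Ballpark it with the classic Poisson yield model; the 0.1 defects/cm^2 defect density here is an assumed figure, purely for illustration:

import math

def poisson_yield(die_area_mm2, defects_per_cm2=0.1):
    # fraction of dice with zero defects: Y = exp(-A * D)
    return math.exp(-(die_area_mm2 / 100) * defects_per_cm2)

print(f"815 mm^2 die: {poisson_yield(815):.0%}")  # ~44% good dice
print(f"200 mm^2 die: {poisson_yield(200):.0%}")  # ~82% good dice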
Wouldn't they just release it with Vega drivers?
AMD spends more money on forum shills than products LUL
There are audio DSPs. There are image processing DSPs for camera sensors. A DSP is just one kind of ASIC that handles input from something else.
GPUs are a specific type of IC.
TOP KEK
That would be the stacked DRAM
muh yield, you either go big or go home
also you can still cut those big chips down for mid-range like Nvidia is doing
it will most likely just work like Ryzen, which is also MCM
this expert analysis brought to you from the 1.7% Institute
Threadripper/Epyc*
uhhh bigger dies = shit yields
small dies = a lot of dies on the wafer = fewer losses to yield
see Zen, which has pretty darn good yields
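Naive dies-per-wafer math for a 300mm wafer (area-only estimate, ignoring edge loss and scribe lines, so real counts run lower):

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

print(dies_per_wafer(815))  # ~86 candidates per wafer
print(dies_per_wafer(200))  # ~353 candidates per wafer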
Sure, could be neat. Depends on price / performance versus Njewdia though.
each Ryzen has 2 CCX units, thus being an MCM
Threadripper has 4 and Epyc has 8
You're mistaken. There are two CCX per die. That image you posted is a single die.
Desktop Ryzen is not an MCM.
Cut them to produce gamer shit.
a better pic
those yields are only for the full chip, cut-down parts have higher yield, not shown
You're still wasting wafers.
I know that feel.
>Zen+
>Nazi
>Zen2
>neetbux because all the Waiting™
PC consumer future is looking good
even better if AMD can asspull the drivers on Vega
>even better if AMD can asspull the drivers on Vega
It's not "if", it's "when".
vega is garbage. my fury x will hold out for navi for sure
Dude this 4gpu card will launch with the 6ghz 8 core phenom
INFINITY FABRIC
N
T
E
L
I
S
K
I
L
L
And the answer to your When? is Never Ever.
>pic from an Nvidia slide deck
Seems legit.
>if is kill
Whut?
Of course, Mr. Jensen.
Of course.
It isn't though
This shit is from nvidia
AMD has no papers on an MCM GPU specifically.
Their paper on exascale computing describes a chiplet APU, which is as close as it gets.
Pretty cool but who cares? My RX470 4GB is still capable of every game at 1080p 60FPS on high/ultra settings.
Well, if the performance/W increases, we can get better mobile chips.
Speaking of which. Where the FUCK is Raven Ridge?
This is going to be like the 295X all over again, punching above its weight for years, but more compatible.
Q4, I'm waiting on it too
>august 2018
Too long. Can you feel it?
>getting hyped for Another Massive Disappointment
No thanks, I already learned my lesson.
What vega drivers?
>lets just make Crossfire happen on one card
What could possibly go wrong?
>first everything becomes integrated on a single piece of silicon, they spend years doing this
>then they split everything up again, they spend years doing this
Why?
because they reached the limit of one chip and now have to utilize multiple
similar to how single core is dead nowadays and you need at least 2 or 4
also cost
see
Block heater incoming
you are a fucking moron
It's not crossfire you dumb nigger, it behaves like a single gpu
Amdahl's Law
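i.e. the speedup from adding modules is capped by whatever fraction of the work can't be parallelized. With an assumed 90% parallel workload, just to put numbers on it:

def amdahl_speedup(n, parallel_fraction=0.9):
    # S = 1 / ((1 - p) + p / n)
    return 1 / ((1 - parallel_fraction) + parallel_fraction / n)

print(amdahl_speedup(4))          # ~3.08x from 4 GPUs, not 4x
print(amdahl_speedup(1_000_000))  # ~10x, the hard ceiling at p = 0.9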
what's what amigo?
That's a lie. If it reads 4 small GPUs as one, then it wouldn't be any different from, for example, a GPU having 4096 compute units. The work is split up anyway.
If they get this to work you'll see 100% scaling for each GPU added.
Besides, it will force game developers to take advantage of new systems and we can see further progress in graphics.
>stacked DRAM
I call bullshit on that image
HBM is exactly that, stacked DRAM.