
techpowerup.com/237725/amd-navi-gpu-by-q3-2018-report

wccftech.com/amd-navi-gpu-launching-siggraph-2018-monolithic-mcm-die-yields-explored

>It's Infinity Fabric, but now on GPU
>It's LITERALLY "RyZen of GPUs"
>16GB HBM2 is the "mid tier"
>Third quarter of 2018
Why the FUCK would anyone in their sane mind buy ANY noVideo OR any /Ve/Ga/ now? Literally IRRELEVANT!


>Threadripper of GPUs
FTFY

>q3-2018
>why the fuck would you buy something now

>source is tweakertown
>reported by wccfturd
Wow it's fucking nothing. Shitty rumor mill as usual.

Ah, now that Vega is confirmed shit as expected, the AMDfag fixates on the next JUST WAIT architecture (GCN 7.3), armed with information from such reputable sites as PajeetTech and TweakTown.

>wccftech future predictions
Besides that, it's about time both manufacturers started moving to a modular, Ryzen-like design in their GPUs

>Besides that, it's about time both manufacturers started regressing back to a modular, Core2Quad-like design in their GPUs
Fixed that for ya.

The C2Q communicated over the FSB.
It's a night-and-day difference between that and Zen.

They should have done it way before.
The first time Nvidia released a chip that was too expensive to sell to consumers at all (was it the 600 series?) was the time.
>hurr durr i am buttmad my favorite company is intellectually inferior and cant do mcm

>The first time Nvidia released a chip that was too expensive to sell to consumers at all (was it the 600 series?) was the time.
It wasn't expensive, it was TSMC shitting itself as usual.

Nvidia start work in next volta.
anandtech.com/show/11913/nvidia-announces-drive-px-pegasus-at-gtc-europe-2017-feat-nextgen-gpus

When the technology engineers can't make it, it's the fault of the design engineers.
Designers are in charge, and when you're in charge, a tall order is your fault, not theirs; you should know what to expect from whom.
Engineers are supposed to design stuff that is reflective of the available technology.
>I designed a flying car
>it's not my fault the technology and materials to make it don't even exist

>Nvidia start work in next volta.
Learn English pajeet.
>anandtech.com/show/11913/nvidia-announces-drive-px-pegasus-at-gtc-europe-2017-feat-nextgen-gpus
Are they THIS desperate for attention?
The thing will pass tapeout in two years at best and they're already teasing it?
Stop defending TSMC, their nodes pre-16FF were ass.
Both the 40nm and 28HKMG launches were a disaster.
The 32nm cancellation was also shit.

Nvidiots should have known it then and known what to expect of them.

The point stands.
Electronics don't have to be expensive. They could make them better and cheaper, but they just won't unless AMD makes them.

>it's just speculation
I'm surprised MCM isn't already the standard in GPUs. I just assumed it was.
Looking at this, though:
research.nvidia.com/publication/2017-06_MCM-GPU:-Multi-Chip-Module-GPUs
I'm not a fan of the reduced bandwidth. High bandwidth is pretty important for things like games. The caches supposedly remedy this, but I don't quite see how high-bandwidth applications would benefit all that much when the cache is so small. It doesn't seem to address the core issue.
It'll be massive for HPC though.
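The bandwidth worry can be put into rough numbers with a back-of-envelope model (my own sketch, not from the NVIDIA paper; every figure and hit rate below is made up for illustration):

```python
# Toy model of an MCM GPU: a chiplet's own HBM stack is fast, the
# inter-chiplet link is slower, and an on-chiplet cache turns some
# of the remote traffic back into local-speed traffic.

def effective_bandwidth(local_bw, remote_bw, local_fraction, remote_hit_rate):
    """Average bandwidth a kernel sees, in GB/s.

    local_fraction:  share of accesses hitting the chiplet's own DRAM.
    remote_hit_rate: share of remote accesses served by the on-chiplet
                     cache (those get counted at local speed).
    """
    remote_fraction = 1.0 - local_fraction
    at_local_speed = local_fraction + remote_fraction * remote_hit_rate
    at_remote_speed = remote_fraction * (1.0 - remote_hit_rate)
    return at_local_speed * local_bw + at_remote_speed * remote_bw

# Made-up numbers: 1000 GB/s local HBM, 256 GB/s inter-chiplet link,
# 4 chiplets so only ~25% of a naive kernel's accesses land locally.
print(effective_bandwidth(1000, 256, 0.25, 0.0))  # no cache reuse -> 442.0
print(effective_bandwidth(1000, 256, 0.25, 0.8))  # 80% remote hits -> 888.4
```

A streaming workload with little reuse sits near the first number, which is exactly the complaint above; the bet behind the caches is that real workloads have enough reuse to land near the second.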

>I'm surprised MCM isn't already the standard in GPUs.
AMD was not in the best state for the last 10 years, and it will take 10-15 years for nVidia to engineer a non-broken, low-latency, cache-coherent bus without hiring AMD (read: former DEC) dudes.
>The caches supposedly remedy this but I don't quite see how high bandwidth applications would benefit all that much when the cache is so small
And then you remember how small LDS/GDS is.
>Doesn't seem to address the core issue.
It does.
>It'll be massive for HPC though.
It's actually backwards: MCM makes programming extremely complicated.

>makes programming complicated
For HPC especially? How so?
>then you realize
Actually I realized I wasn't thinking right at all, I misread.
>AMD was not in the best state
Not sure how that's relevant. Why was MCM not an early step for parallel computing devices like GPUs?

>For HPC especially? How so?
It's NUMA. On a fucking GPU. You already have to dance around GDS to schedule the whole GPU properly, and now you have several (say, 4) chiplets and an interconnect layer.
>Why was MCM not an early step for parallel computing devices like GPUs?
Developing an interconnect fast and wide enough to handle GPUs is a very difficult task.
It took 4 years for stacked DRAM to arrive in an actual product after AMD prototyped it back in 2011.

AMD started work on stacked DRAM in 2006 along with Amkor, IIRC. A product was expected in 2013 (Tiran) but mechanical issues delayed it until 2015.

Who fucking cares

Call me when it can upscale images of my waifu as fast as Nvidia cards

I know.
They showed a working prototype of stacked DRAM back in 2011.
But yeah, when it comes to developing & patenting something fundamentally new, AMD is ahead of basically fucking everyone.
Whatever they've done to geometry pipeline in Vega is also bretty insane.
Use HIP then port waifu2x to OpenCL you dumb CIA nigger.

>It's Infinity Fabric, but now on GPU
I wonder how bad the micro-stuttering will be.

>Our potential GPU that will maybe be out in perhaps one year will possibly be good
Wow, how amazing.

Irrelevant in GPU parallel computing.

Nah, IBM does far more blue sky research. Shame nothing they do makes it to consumers though.

Why doesn't Ryzen have glued-on DRAM?
Call it L4 cache.

APU with HBM? Yes, please.

IBM does a lot of things, but nothing (besides OpenCAPI) becomes an industry standard.
AMD gave us IMCs, NUMA, unified shaders, GDDR3 (that was ATi), GDDR5 and HBM.

>q3-2018

Wait

>guy guy guy, just wait!

You wait, and something better comes out.
Unbelievable!

But Nvidia does it...

I waited for Vega and it is a turd. The 1080 was here 1.5 years ago.

>inb4 forza
wccftech.com/geforce-driver-387-92-optimizes-forza-7/amp/

Consumer Vega is yet to be finished.
Might as well buy an OEM system with a WX9100 if you want a finished product.

Too bad they shit the bed with Vega

I waited™ for a long time, and in the end I just ended up buying the 1080ti instead. Shit value, but this is what AMD made me do. If only Vega64 had been good, but it seems only the liquid cooled version was alright, and even then it pulled way too much power.
Not to mention that there STILL aren't any aftermarket Vega cards on the market, and the card launched a little over a month ago.

This new tech that they're putting into Navi looks promising, but so did Vega.

ps, don't buy a 1080ti, it's a terrible card.

The new tech in Vega is yet to work.
Navi is nothing more than "scalability" and possibly dedicated AI circuitry.

Ryzen CPUs have much less stutter than Intel CPUs.

Yeah, which is why I said
>Too bad they shit the bed with Vega

I doubt it's ever going to work at this point or make any sort of large difference in gaymes. What kind of company can't get such an important feature working for launch, and what kind of company can't even manage to get it working over a month after release?

Either they decided it's not worth it, or there is some larger issue that can't be fixed by a software update. Even if they do fix it a month or two from now it's still going to be too late, the reputation of Vega is already ruined.

>q3 2018
So q4 2020?

Poo in the GPU.

Keep buying top notch nshitia products.

HBM3
64GB
2TB/s

>hyping up another AMD gpu

>Doubling to quadrupling Vega's polygon throughput won't make a large difference in games
(You)

Seeing how they bombed Vega and how miners are destroying all the msrps, I'm not too confident.

Monolithic dies are dead, moar cores, modularity and hyperscalers are the future no matter how butthurt you are about it.

>What kind of company can't get such an important feature working for launch, and what kind of company can't even manage to get it working over a month after release?
The company that tries to kill Quadro above all.

>Nvidia is going turbo jew
>AMD is MIA
why the fuck would you buy anything right now?

>nvidia glued together some of their shitty gpus
What will they re-use next? Their wood screws?

As opposed to reality, when I bought my 290X:

>almost no hype, AMD kept the "8970" under wraps completely
>some early benchmarks show it making the Titan its bitch
>it gets released
>benchmarks were 100% true
>I bought it for $600 when it was still the world's fastest GPU, and it remained at the top of benchmarks until the 980 Ti came out almost 2 years later