>It's Infinity Fabric, but now on GPU
>It's LITERALLY "RyZen of GPUs"
>16GB HBM2 is the "mid tier"
>Third quarter of 2018
Why the FUCK would anyone in their right mind buy ANY noVideo OR any /Ve/Ga/ now? Literally IRRELEVANT!
>q3-2018
>why the fuck would you buy something now
Colton Sanchez
>source is tweakertown
>reported by wccfturd
Wow, it's fucking nothing. Shitty rumor mill as usual.
Eli Nguyen
Ah, now that Vega is confirmed shit as expected, the AMDfag fixates on the next JUST WAIT architecture (GCN 7.3), armed with information from such reputable sites as PajeetTech and TweakTown.
Kayden Wilson
>wccftech future predictions
Besides that, it's about time both manufacturers started moving to a modular, Ryzen-like design in their GPUs.
Nicholas Lewis
>Besides that, it's about time both manufacturers started regressing back to the modular, Core2Quad-like design in their GPUs
Fixed that for ya.
Jace Morris
C2Q communicated using the FSB. It's a night-and-day difference between it and Zen.
Jaxon Sullivan
They should have done it way earlier. The first time Nvidia released a chip too expensive to sell to consumers at all (was it the 600 series?) was the time.
>hurr durr i am buttmad my favorite company is intellectually inferior and cant do mcm
Thomas Sullivan
>The first time Nvidia released a chip too expensive to sell to consumers at all (was it the 600 series?) was the time.
It wasn't expensive, it was TSMC shitting itself as usual.
When the technology engineers can't make it, it's the fault of the design engineers. Designers are in charge; when you're in charge, a tall order is your fault, not theirs. You should know what to expect from whom. Engineers are supposed to design shit that reflects the available technology.
>I designed a flying car
>it's not my fault the technology and materials to make it don't even exist
Cameron Wilson
>Nvidia start work in next volta.
Learn English, pajeet.
>anandtech.com/show/11913/nvidia-announces-drive-px-pegasus-at-gtc-europe-2017-feat-nextgen-gpus
Are they THIS desperate for attention? The thing won't even pass tapeout for two years at best and they're already teasing it?
Stop defending TSMC, their nodes pre-16FF were ass. Both the 40nm and 28HKMG launches were a disaster, and the 32nm cancellation was also shit.
David Flores
Nvidiots should have known it then and known what to expect of them.
The point stands. Electronics don't have to be expensive; they could make them better and cheaper, but they just won't unless AMD makes them.
Landon Morales
>it's just speculation
I'm surprised MCM isn't already the standard in GPUs. I just assumed it was. Looking at this, though:
research.nvidia.com/publication/2017-06_MCM-GPU:-Multi-Chip-Module-GPUs
I'm not a fan of the reduced bandwidth. High bandwidth is pretty important for things like games. The caches supposedly remedy this, but I don't quite see how high-bandwidth applications would benefit all that much when the cache is so small. Doesn't seem to address the core issue. It'll be massive for HPC, though.
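Actually, let me put numbers on my own complaint: a streaming kernel does so little math per byte that no cache, however clever, hides a slow link, because every element is touched exactly once. A minimal CUDA sketch of what I mean (the sizes are my own illustrative assumptions, nothing from the paper):
[code]
// Bandwidth-bound streaming kernel: each element is read/written once,
// so the working set never fits in any cache and throughput is bounded
// by whatever link the data sits behind. Sizes here are illustrative.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // 2 FLOPs per 12 bytes of traffic
}

int main() {
    const int n = 1 << 26;              // two 256 MB arrays, streamed once
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    printf("done\n");
    return 0;
}
[/code]
Each pass moves 12 bytes per 2 FLOPs, so an MB-scale cache in front of an inter-chiplet link changes nothing once the arrays run to hundreds of MB.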
Austin Russell
>I'm surprised MCM isn't already the standard in GPUs.
AMD was not in the best state for the last 10 years, and it will take nVidia 10-15 years to engineer a not-broken, low-latency, cache-coherent bus without hiring AMD (read: former DEC) dudes.
>The caches supposedly remedy this, but I don't quite see how high-bandwidth applications would benefit all that much when the cache is so small
And then you remember how small LDS/GDS is.
>Doesn't seem to address the core issue.
It does.
>It'll be massive for HPC, though.
It's actually the other way around: MCM makes programming extremely complicated.
Jonathan Phillips
>makes programming complicated
For HPC especially? How so?
>And then you remember
Actually, I realized I wasn't thinking right at all; I misread.
>AMD was not in the best state
Not sure how that's relevant. Why was MCM not an early step for parallel computing devices like GPUs?
Sebastian Wood
>For HPC especially? How so?
It's NUMA. On a fucking GPU. You already have to dance around GDS to schedule the whole GPU properly, and now you have several (say, 4) chiplets and an interconnect layer on top.
>Why was MCM not an early step for parallel computing devices like GPUs?
Developing an interconnect fast and wide enough to handle GPUs is a very difficult task. It took 4 years for stacked DRAM to arrive in an actual product after AMD prototyped it back in 2011.
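To make the NUMA pain concrete, here's a rough sketch using today's multi-GPU CUDA as a stand-in for chiplets; the device count, the partitioning, and the one-slice-per-chiplet layout are my own assumptions for illustration, not how an actual MCM GPU would present itself:
[code]
// The NUMA bookkeeping that MCM dumps on the programmer, with plain
// multi-GPU CUDA standing in for chiplets. Values left uninitialized
// on purpose; the point is data placement, not the arithmetic.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *p, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] *= a;
}

int main() {
    int devs = 0;
    cudaGetDeviceCount(&devs);          // pretend each device is a chiplet
    if (devs < 1 || devs > 16) return 1;
    const int n = 1 << 20;
    const int chunk = n / devs;         // manual placement: one slice of
    float *part[16];                    // the problem per "chiplet"

    for (int d = 0; d < devs; ++d) {
        cudaSetDevice(d);
        cudaMalloc(&part[d], chunk * sizeof(float));
        // local work is cheap; touching another chiplet's slice would
        // need an explicit cudaMemcpyPeer and eat link bandwidth
        scale<<<(chunk + 255) / 256, 256>>>(part[d], chunk, 2.0f);
    }
    for (int d = 0; d < devs; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(part[d]);
    }
    printf("scaled %d elements across %d chiplets\n", n, devs);
    return 0;
}
[/code]
On a monolithic die none of this bookkeeping exists: the hardware scheduler sees one pool of SMs over one memory. With chiplets, every line above becomes the programmer's problem.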
Carson Smith
AMD started work on stacked DRAM in 2006 along with Amkor, IIRC. A product was expected in 2013 (Tiran), but mechanical issues delayed it until 2015.
Gabriel Clark
Who fucking cares
Call me when it can upscale images of my waifu as fast as Nvidia cards
Zachary Fisher
I know. They showed a working prototype of stacked DRAM back in 2011. But yeah, when it comes to developing & patenting something fundamentally new, AMD is ahead of basically fucking everyone. Whatever they've done to the geometry pipeline in Vega is also bretty insane.
Use HIP, then port waifu2x to OpenCL, you dumb CIA nigger.
David James
>It's Infinity Fabric, but now on GPU
I wonder how bad the micro-stuttering will be.
Matthew Lewis
>Our potential GPU that will maybe be out in perhaps one year will possibly be good
Wow, how amazing.
Zachary White
Irrelevant in GPU parallel computing.
Brody Wilson
Nah, IBM does far more blue-sky research. Shame nothing they do makes it to consumers, though.
Gavin Ward
Why doesn't Ryzen have glued-on DRAM? Call it L4 cache.
Caleb Hughes
APU with HBM? Yes, please.
Ian Perry
IBM does a lot of things, but nothing (besides OpenCAPI) becomes an industry standard. AMD gave us IMCs, NUMA, unified shaders, GDDR3 (that was ATi), GDDR5, and HBM.
Alexander Brown
>q3-2018
Wait
Daniel Peterson
>guy guy guy, just wait!
Anthony Sanders
You wait, and something better comes out. Unbelievable!
Liam Turner
But Nvidia does it...
Bentley Gutierrez
I waited for Vega and it is a turd. The 1080 was here 1.5 years ago.
Consumer Vega is yet to be finished. Might as well buy an OEM system with a WX9100 if you want a finished product.
John Clark
Too bad they shit the bed with Vega
I waited™ for a long time, and in the end I just ended up buying the 1080ti instead. Shit value, but this is what AMD made me do. If only Vega 64 had been good, but it seems only the liquid-cooled version was alright, and even then it pulled way too much power. Not to mention that there STILL aren't any aftermarket Vega cards on the market, and the card launched a little over a month ago.
This new tech that they're putting into Navi looks promising, but so did Vega.
PS: don't buy a 1080ti, it's a terrible card.
Carson Jones
The new tech in Vega is yet to work. Navi is nothing more than "scalability" and possibly dedicated AI circuitry.
Cameron Martinez
Ryzen CPUs have much less stutter than Intel CPUs.
Wyatt Roberts
Yeah, which is why I said
>Too bad they shit the bed with Vega
I doubt it's ever going to work at this point or make any sort of large difference in gaymes. What kind of company can't get such an important feature working for launch, and what kind of company can't even manage to get it working over a month after release?
Either they decided it's not worth it, or there is some larger issue that can't be fixed by a software update. Even if they do fix it a month or two from now it's still going to be too late, the reputation of Vega is already ruined.
Aiden Nelson
>q3 2018
So q4 2020?
Ethan Scott
Poo in the GPU.
Nicholas Williams
Keep buying top notch nshitia products.
Robert Myers
HBM3 64GB 2TB/s
Jeremiah Rivera
>hyping up another AMD gpu
Juan Phillips
>Doubling to quadrupling Vega's polygon throughput won't make a large difference in games (You)
Dylan Gomez
Seeing how they bombed Vega and how miners are destroying all the MSRPs, I'm not too confident.
Ayden Sanders
Monolithic dies are dead; moar cores, modularity, and hyperscalers are the future, no matter how butthurt you are about it.
Asher Price
>What kind of company can't get such an important feature working for launch, and what kind of company can't even manage to get it working over a month after release?
The company that tries to kill Quadro above all.
Jackson Parker
>Nvidia is going turbo jew
>AMD is MIA
Why the fuck would you buy anything right now?
Evan Morgan
>nvidia glued together some of their shitty gpus
What will they re-use next? Their woodscrews?
Juan Butler
As opposed to reality when I bought my 290X:
>almost no hype, AMD kept the "8970" under wraps completely
>some early benchmarks show it making the Titan its bitch
>it gets released
>benchmarks were 100% true
>I bought it for $600 when it was still the world's fastest GPU, and it remained at the top of benchmarks until the 980 Ti came out almost 2 years later