NVIDIA Volta with GDDR6 in early 2018

videocardz.com/68948/skhynix-gddr6-for-high-end-graphics-card-in-early-2018

>also wait™ vega

Other urls found in this thread:

vr-zone.com/articles/gddr6-memory-coming-in-2014/16359.html
zotac.com/my/product/graphics_card/zotac-geforce-gtx-1080-mini
msi.com/Graphics-card/GeForce-GTX-1070-AERO-ITX-8G-OC.html
hothardware.com/news/intel-kaby-lake-g-series-leaks-with-heterogenous-multi-die-design
eteknix.com/amd-unveils-vega-based-radeon-instinct-mi25-gpu/
twitter.com/SFWRedditImages

Wait GDDR6 is a thing?

I thought GDDR5X was made because there was going to be a big gap before 6 so they pushed a high spec 5 instead.

The flagship will get HBM2, not sure on 4096 bit

"Early"

NOOOOOOOOO

And another paper launch? God I love novidia.

that's rich coming from amdrone still waiting for vega

But I have novideo card.

Paper launch?

bbut HBM2 was meant to GOAT..????! it was developed by highly skilled engineers at AMD, wtf Nvidia???!

Pascal was launched eons ago with GP100.

like it matters anyway Nvidia is just going to dump marginally better tech because why not milk the monopoly like big pharma or google or amazon does, shit's like the style now.

It is GOAT. It's just costly.

so AMD is gonna be TWO fucking generations behind nvidia
what a time to be alive

vega was launched december 2016 for enterprise

we have no idea what vega is yet

And MI25 is already here. Much much faster than GP100.

It's Maxwell from AMD.

Citation

>wait for vega while nvidia is pushing out gddr6 and double clock speeds
yeah dude, I am totally waiting for vega

It's Sup Forums so fucking google it.

>at compute

AMD always was faster at compute.

Bullshit

kill yourself AMDead

>vega 2017 HBM 2
>volta 2018 gddr6

yup, just wait for volta

I need a citation and not your AMD cringe leak please

>vega will be slower than 1070/1080/1080Ti
>Nvidia will then release Volta which will be faster again
>AMD will fall further and further behind
Step up your game AMD, you're stagnating GPU tech by not being competitive.

>I can't count
Okay, Rajeesh.

B-b-but 2xvega pro duo :^)

Ramesh pls.

>will be
>will be
>will be

hey, I can do that too

I'm the england queen

I need the source of what you said and fuck off with your TFLOP meme from amd

>256bit GDDR6 card will have the same memory bandwidth as a 2-stack HBM2 AYYMD HOUSEFIRES at significantly lower cost
>384bit GDDR6 card will have more bandwidth than AYYMD HOUSEFIRES

AYYMD IS FINISHED & BANKRUPT

AYYMDPOORFAGS CONFIRMED ON SUICIDE WATCH
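The bandwidth claim above is easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch, assuming hypothetical but commonly cited pin rates of 16 Gbps for GDDR6 and 2.0 Gbps for HBM2:

```python
# Peak memory bandwidth = bus width (pins) * per-pin data rate / 8 bits per byte.
# Pin rates below are assumed figures (16 Gbps GDDR6, 2.0 Gbps HBM2), not
# confirmed specs of any shipping card.

def bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits * pin_rate_gbps / 8

gddr6_256   = bandwidth_gbps(256, 16.0)       # 256-bit GDDR6 card
gddr6_384   = bandwidth_gbps(384, 16.0)       # 384-bit GDDR6 card
hbm2_2stack = bandwidth_gbps(2 * 1024, 2.0)   # two 1024-bit HBM2 stacks

print(gddr6_256, gddr6_384, hbm2_2stack)  # 512.0 768.0 512.0
```

So under these assumed rates a 256-bit GDDR6 bus does indeed match a 2-stack HBM2 setup at 512 GB/s, and 384-bit exceeds it.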

>Rajeesh
stop projecting poojeet

>And MI25 is already here. Much much faster than GP100
>t. amdrone

Stop

>pure compute power is a meme
Are you braindead? I mean you're an Indian, but how can one be so retarded?

>The Quadro GP100 is also the first instance of many technologies coming close to the consumer front in the form of HBM2, NVLINK and over 20 TFLOPs

>Radeon Instinct MI25 - With a Vega GPU which is listed at 25 TFLOPS

enterprise specs never lie.

All the Pajeets.

GP100 is 9.7tflops FP32 afaik.

Where are the tests? Where is the (((MI25)))? These numbers from AMD mean NOTHING.

>MI25 Announced – Features 12.5 TFLOPs of FP32 Compute Performance.

those numbers from nvidia mean nothing.
I can do that too.

>enterprise specs mean nothing
End yourself, Rajeet.

4k introduced?

>Lying about things you don't know

>specs mean nothing
Okay, Ganesh.

>The GP100 GPU based on the Pascal architecture has a performance of 10.6 TFLOPS of FP32 and 21.2 TFLOPS of FP16.

he wasn't that far off
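The TFLOPS figures being argued over all come from the standard peak-throughput formula: 2 FLOPs per shader per clock (one fused multiply-add). The shader counts and boost clocks below are the commonly cited figures for GP100 and Vega 10 (MI25), used here only to show the arithmetic, not as confirmed specs:

```python
# Peak FP32 throughput: 2 FLOPs (FMA) * shader count * clock.
# Shader/clock values are assumed from public spec sheets.

def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    """Theoretical peak single-precision TFLOPS."""
    return 2 * shaders * clock_ghz / 1000.0

gp100 = fp32_tflops(3584, 1.48)   # ~10.6 TFLOPS
mi25  = fp32_tflops(4096, 1.526)  # ~12.5 TFLOPS

print(round(gp100, 1), round(mi25, 1))  # 10.6 12.5
```

Note this is a theoretical ceiling; sustained throughput in real workloads depends on memory bandwidth and occupancy, which is why "enterprise specs" alone settle nothing.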

We have tests and benchmarks of Quadros and Teslas with the GPxxx but nothing from AMD. What is the price of the '''MI25'''?

>compute numbers mean nothing for compute cards
Okay.

you are retarded, enterprise specs never lie because if they do the company gets sued for false advertising by its corporate customers
the enterprise market is not forgiving, you can't ride your customers on 3.5GB

You guys are missing the important thing here

Will it be called the 11xx series or something retarded like 20xx?

No one cares about the name.

Bounty bars are delicious and I miss them like heck in Japan.

Snickers are fairly boring but about the only candy bar they have. A few places have Kitkat Chunkys, but they're also nothing next to a bounty bar.

Which reminds me, there was supposed to be some recipe to making your own bounty bars. I should look that back up.

>Volta
>Pascal with GDDR6

mark my words.

vr-zone.com/articles/gddr6-memory-coming-in-2014/16359.html

I fucking care about the name. If they make it the 20xx I'll have to change my computer's name to the TARDISt200 and that is a HUGE version jump

Is HBM just too difficult to scale up in comparison to GDDR?

Should it just be limited to constrained environments like laptops, tablets, etc where there is no upgrade path so there is an actual benefit in saving space over everything else?

No, it's just that slapping on more GDDR is cheaper.

16 Gbps GDDR6 vs 512 GB/s+ HBM2 stacks
Ye nah

Slapping more GDDR means more traces though.

Aaaaand that's precisely why NVIDIA gives zero fucks. It's all up to the vendors. And slapping on more traces is still easier than working with interposers.

GPUs are already 6/8 layered PCB with or without HBM

You still need space to get the traces from the GPU pins to the other layers and then bring them back. It would be an absolute pain in the ass to try and get like 16 chips connected to a single GPU, which is why they limit to around 8.
Most likely this Volta card will have 8 2GB, nothing really new in regard to the number of chips or the traces required.
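The "around 8 chips" ceiling above falls out of GDDR's 32-bit per-chip interface: chip count is just bus width divided by 32. A sketch (ignoring clamshell mode, which puts two chips on one 32-bit channel; the 2 GB-per-chip density is an assumption for illustration):

```python
# GDDR memory chips each expose a 32-bit interface, so a card needs
# bus_width / 32 chips, and capacity is chips * density per chip.
# The 2 GB/chip density below is assumed, not a confirmed Volta spec.

def chips_needed(bus_width_bits: int) -> int:
    """Number of GDDR chips for a given bus width (no clamshell)."""
    return bus_width_bits // 32

def capacity_gb(bus_width_bits: int, gb_per_chip: int) -> int:
    """Total VRAM for that bus width at a given per-chip density."""
    return chips_needed(bus_width_bits) * gb_per_chip

print(chips_needed(256))    # 8 chips on a 256-bit bus
print(capacity_gb(256, 2))  # 16 GB, matching the 8 x 2GB guess above
print(chips_needed(384))    # 12 chips on a 384-bit bus
```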

Add more layers; it really isn't as hard as you think. PCB cost only goes up with the number of layers, not the number of traces.

Navi is hbm3

>needs 2 fucking phases just to modulate the GDDR*
>uses 3 times the power of HBM*
>prevents SFF à la the Nano
>higher latency


No thanks, HBM2 on $600+ GPUs, keep your GDDR on mid range cards

> wait™
Yeah nah
I'll just pick whatever next 120W card offers a significant feature improvement over my current card. 150W is pushing it in my tiny brick with bad airflow, while 120W works fine with case fans on minimum.
GTX960 raw performance is fine enough for 1920x1200 anyway.

Are you stupid or what?

zotac.com/my/product/graphics_card/zotac-geforce-gtx-1080-mini

msi.com/Graphics-card/GeForce-GTX-1070-AERO-ITX-8G-OC.html

>GP104
>still bigger than the Nano

Only HBM allows making truly smOll high-end cards.

That's the point?

GDDR, even GDDR6 which is still 9 months away, is inferior to HBM2 in every single aspect besides price, and I don't really give a fuck about price when I'm throwing over half a grand at a GPU

And it will be inferior to HBM3. GDDR is cheaper though.

That's what I said.

Capacity maybe? HBM2 tops out at 16GB at the moment, though that seems enough even for a compute monster like P100.
Also 8GB stacks are coming out sooner or later, 32GB will be enough until HBM3

Intel seems to have found the answer to the high interposer and TSV cost in EMIB,
which is rumored to debut in Kaby Lake-G late in the year with HBM2 and a Radeon GPU

It's like Lego honestly, these APUs aren't using custom buses for GPU-CPU communication.

I thought Vega cards existed, but they were being deployed as workstation or grid computing cards?

>Radeon GPU
kek now intel igpu will have abysmal drivers too

But will it support async compute?

hothardware.com/news/intel-kaby-lake-g-series-leaks-with-heterogenous-multi-die-design

eteknix.com/amd-unveils-vega-based-radeon-instinct-mi25-gpu/

Can't be any worse than the inability to run Diablo 2 or Warcraft 3 without freezes every 5 seconds on a Intel HD4000

>async compute
here comes the retard regurgitating amd marketing buzzwords

>async compute does not exist REEEEEEEEEE

>Intel seems to have found the answer
yup, using AMD patent is the answer.

>and a Radeon GPU
No.
Intel have supposedly licensed AMD iGPU related IP after the very same deal they held with nVidia expired a few months ago.

It's just a hack to mitigate amd's abysmally bad gpu architecture.

Yes, and so is HBM3 also coming next year.
Up to 64GB per stack and multiple-terabytes-per-second of bandwidth.

GCN is godlike though.

yeah sure
that's why they need (((async))), because amd cheaped out on ROPs and half their shaders sit idle

That's an arch design. GCN leverages compute. And it aged better than anything novideo.

>cheaped out on ROPs
aren't you confused there? nvidia is the one that cheaped out on ROPs.

>what is fury x
>aged better
you mean they finally got their heads out of their asses and managed to deliver somewhat acceptable drivers, which still have terrible OpenGL support

>A $550 290X
>same as the 390X in the attached image
>from 2013
>still +/-5% of the $650 980Ti at 1440p from 2 years ago
>regardless of async compute being used or not.
>bad architecture reeeeee!!!!!

Now I bet you'll go post day 1 980Ti release benchmarks of the old GCN drivers, faggot.
GCN is just a plain superior architecture to anything Nvidia has released the past 6 years.

>amdpoojeets are so retarded that they fail to quote the right person
colour me surprised

>Hawaii STILL kicking ass
>keklers in the trashcan
Motherfucking hilarious.

I quoted the right person, dumb fuck. Delete your embarrassment.

Well you need your own personal nuclear reactor to give enough juice to the 9000000000000000W
r9 furry xxxxyxxxyxx.

The 1070 at least does not need a custom loop with liquid nitrogen and a 2000W PSU

>not tested: Fermi, because it'd be far too embarrassing. But go ahead and keep testing the 2011 280X/7970.