So how much of a jump would Volta be compared to Pascal and Maxwell?

10-15% from pascal

10-20% perf with 10-20% bigger dies.

So Pascal 1080Ti is about 11.5 Teraflops....Volta will be 14 Teraflops? That's insane.

Also nVidia will switch to HBM2 starting with Volta so that will definitely make things faster I think.

Went from 12.5 TFLOPS to 15 TFLOPS on a freaking ~815 mm² chip. There is a reason porkchops is pressing so hard on Tesla V100 marketing: they want to recoup at least some of the R&D, because the product is mediocre.
That's it. Downscale it to a chip 3x smaller.
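For reference, the peak FP32 figures being thrown around are just 2 FLOPs per FMA x shader count x clock. A quick sketch using the public spec-sheet core counts and boost clocks (assumed to be the sustained clocks):

```python
# Peak FP32 throughput = 2 FLOPs per FMA x CUDA cores x clock (GHz) -> TFLOPS.
# Core counts and boost clocks are the public spec-sheet figures.
def peak_fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    return 2 * cuda_cores * boost_ghz / 1000

print(peak_fp32_tflops(3584, 1.582))  # GTX 1080 Ti -> ~11.3
print(peak_fp32_tflops(3584, 1.480))  # Tesla P100  -> ~10.6
print(peak_fp32_tflops(5120, 1.455))  # Tesla V100  -> ~14.9
```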

Both companies are preparing for MCM, so this gen is going to be lackluster.
Or Nvidia simply ran out of ideas and needs time for a new uarch, like AMD did; they usually run ~5-year cycles (GCN->Vega, Fermi->Maxwell).

>So Pascal 1080Ti is about 11.5 Teraflops....Volta will be 14 Teraflops? That's insane.
No, since Paxwell already hits scaling issues with this many ALUs.
GP102 is over 50% bigger than GP104 yet only ~30% faster (rough numbers sketched below).
>Also nVidia will switch to HBM2 starting with Volta so that will definitely make things faster I think.
Hell no, JHH will never sacrifice his shekels.
V100 has fuckton of useless (for gaymen) fixed-function shit.
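On the GP102-vs-GP104 scaling point above, a rough back-of-the-envelope using the public die sizes (the ~30% gaming gap is an assumed ballpark, not a measured benchmark):

```python
# Public die sizes; the 1.30x perf figure is an assumed gaming-average ballpark.
gp104_mm2, gp102_mm2 = 314, 471
area_ratio = gp102_mm2 / gp104_mm2   # ~1.50x the silicon
perf_ratio = 1.30                    # ~30% faster, 1080 Ti vs 1080, roughly
print(f"{area_ratio:.2f}x area, {perf_ratio / area_ratio:.2f}x perf per mm2")
```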

>Also nVidia will switch to HBM2
they would have to pay a lot of royalties to AMD
that's why they only do HBM in small volumes
HBM is going mass market though, because it's cheaper in the long run. Heck, even SSDs are going 64-layer.

nvidia.com/en-us/data-center/tesla-v100/

They're literally using HBM2 on the V100s.

They will likely move to HBM2 as it's becoming a little more mainstream now.

He means on consumer cards that sell by the millions, not a few thousand.

Toshiba recently announced 96-layer BiCS 3D NAND (which, hilariously enough, uses TSVs).
V100 is a very low volume product.
Porkshoulders-kun will never use HBM2 in mainstream shit unless GDDR PHYs are seriously bloating the die.

no dude...
it's a fucking insane jump

user, please translate the marketing speech for yourself
Leatherjacket marketed tensor cores like they're something new; they aren't.
Google has tensor cards that are ~20x faster than Volta and draw ~5x less power.
They had to do the whole tensor bullshit because the actual GPU performance jump is small. Cheap trick.
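For context, the tensor core op NVIDIA describes is just a small fused matrix multiply-accumulate: FP16 inputs, FP32 accumulation, on 4x4 tiles. A minimal numpy sketch of the arithmetic (nothing here is vendor code):

```python
import numpy as np

# One tensor-core-style op: D = A @ B + C on a 4x4 tile,
# FP16 multiplies with FP32 accumulation (per NVIDIA's Volta blog).
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.dtype, D.shape)  # float32 (4, 4)
```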

V100 would be an interesting all-in-one ML offering if not for the fucking die size, and thus the price, piercing the heavens.

I think there is a huge reason why Nvidia makes only ~$300M from compute cards and almost a billion from gaming cards.
ASICs are better at deep learning now, simple as that.

GPUs are good at inference though.
AMD needs to make a lower-clocked SFF Vega 10 Instinct card for inference.

I don't know but AMD is totally gonna beat it, just you wait

It's a race now.
The first company to achieve MCM GPUs will win the market for several years.
Volta (as in V100) is a big oversized meme.

I feel like GPUs haven't really progressed since 2013 with the 290 and 2016 with the 1080.

We seem stuck in a bit of a loop

When Volta and Navi drop in 2019, that will be the time to get a new GPU.

Volta is a fatter Paxwell and we don't know anything about Navi.

They used HBM2 for the Pascal Tesla too, so why aren't the consumer Pascal cards using it?

Pascal wasn't even intended to exist, it has a lot of the hardware improvements Volta would have had vs Maxwell. Pascal is essentially the midway point.

just wait(tm)

>They would have to pay a lot of royalties to AMD
HBM2 is a free, open standard from JEDEC.

Volta is the next architecture, something like the 8800 series in performance/watt.

Poor AMD

NVIDIA would need a brand new architecture for that, Volta is still based on G80

Just like Pascal was the next architecture...
...right?
No.
That's not going to happen any time soon.

>That's not going to happen any time soon.
It doesn't need to, G80 was a fantastic architecture.

Does it really matter?

Even if it were a complete standstill compared to Pascal, they would still be miles ahead of AMD in both performance and performance/watt.

The amount that AMD is behind in the high end gpu segment right now truly marks a low point in high end gpu competition.

The key word is "was".
Core was also a fantastic evolutionary uarch.
Too bad Zen murders it violently.

How well would it compare with a 3dfx part from a parallel universe where they're still a thing?

How does that apply to G80?

You're dragging in completely unrelated stuff

>The key word is "was".
Not really, unlike Intel they aren't showing any signs of struggling to get more perf out of their architecture.

Intel haven't made any significant gains since Sandy Bridge while Kepler -> Maxwell and then Maxwell -> Pascal were significant on their own.

I would guess 10%, but with a new non-monolithic design making yields 100% better.

Getting moar performance with GPUs is piss easy: just add moar ALUs.
Pascal is exactly that.
Volta is exactly that (now with moar fixed function units).

Except Pascal is more power efficient too. The 980 Ti could pull over 1kW under LN2, you can't even do that with Pascal.

Wow, a two node jump!
Un-fucking-believable.

Kinda.
Shaders an shiet are starting to need more conditional branching performance, which means smarter schedulers, more fine-grained core clusters, etc.

Too bad AMD can't even achieve that much.

Ha ha.
(You) tried.

>Getting moar performance with GPUs is piss easy
Yet AMD is unable to do it

Oh look it's this clueless shill again.

Are you going to go ahead and make up more ((((facts))))?

Yes, AMD learned it the hard way with Fiji.
(You).

You know that both AMD and nvidia don't actually control the node jumps, right?
Performance yes, you can discuss and argue that AMD did squat with the node jumps.

But both are sitting on their asses, waiting for TSMC or GF to jump.

>AMD learned it the hard way with Fiji.
Learned what, that they were incapable of making a high end GPU?

>that AMD did squat with the node jumps.
Oh, they didn't just do that, they actually went backwards: worse performance at the same clock speeds.

Clearly it'll be capable of 8k 240 fps in every game

(You) are trying a little bit too hard.
You should stop drinking JHH's semen.

You're in a thread about Volta, go fuck off back to the Vega general and concentrate your autism and stupidity there you clueless child.

>retarded shitposter calling someone else clueless
?

(You)

(You)

>tfw even AMD want to pretend RX Vega doesn't exist because it's garbage
>shills still convince themselves it's not garbage or just say to wait for Navi

If you went a bit more high-level with this discussion, you would probably find some nice, actually nice, arguments against AMD and their practices when designing GPUs, but instead you just went edgy 12-year-old.

What he's pointing out with Fiji is that AMD was just blindly chasing muh GFLOPS, and that ended up not being the best of ideas.

Except that's all they could do. Hawaii was a brute-force architecture designed to combat their lack of software optimization, and the push for lower-level APIs was meant to shift the responsibility of harnessing the GPU's compute power onto the game developers.

And Vega is exactly the opposite of that.
And the only way to get moar perf is moar ALUs even for nVidia now.

>And Vega is exactly the opposite of that.
Rumored to be, there's nothing to even suggest half the features they advertised exist. People keep claiming they just need to be 'enabled' in the drivers but nobody has confirmed the hardware is even there to enable. The only new hardware feature AMD even demo'd was the HBCC.

Well, they need to hire better software engineers.
Also, Vulkan doesn't shift the responsibility to the game developers, not entirely.
It shifts it to the open-source compiler toolchain the shaders go through: GLSL/HLSL is compiled offline to SPIR-V, and the driver's compiler lowers that to the GPU's ISA (see the sketch below).
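To make that concrete: a minimal sketch of the offline GLSL-to-SPIR-V step, assuming the open-source glslangValidator tool is on PATH and a shader.frag exists (the driver then consumes the .spv at pipeline creation):

```python
import subprocess

# GLSL -> SPIR-V offline compile via glslang (assumed installed);
# Vulkan itself only ever sees the resulting SPIR-V binary.
subprocess.run(
    ["glslangValidator", "-V", "shader.frag", "-o", "shader.frag.spv"],
    check=True,
)
```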

Considering Vega FE rivals the P6000 where it works, I think it comes down to getting working drivers.

Vulkan is incredibly convoluted.

>where it works
None of those features are involved in that, and no it doesn't rival the P6000 it loses in just about every benchmark out there.

>inb4 posting the same graph you always post
Just stop posting altogether please you don't know what you're talking about.

>None of those features are involved in that
WHAT?

I bet either Vega or the next architecture will solve things with neural networks instead of doing the real silicon work.

Worked wonders for the Ryzen.

Yes, I'm painfully aware of it.
But I'm just adding more victims to the Vulkan hell.

>people complain about all the hardware features not being enabled in drivers and that when they are enabled the gaming performance will have a significant performance increase
>same people cite the professional benchmark performance of FE
>"What do you mean they're not involved with these workloads??"

Now that's some next-level shitposting.

Are you actually fucking retarded? Or are you trying to imply these hardware features are only enabled in these professional workloads?

Think about it.

devblogs.nvidia.com/parallelforall/inside-volta/

>The new Volta SM is 50% more energy efficient than the previous generation Pascal design, enabling major boosts in FP32 and FP64 performance in the same power envelope.
>15 TFLOP/s of single precision (FP32) performance
>7.5 TFLOP/s of double precision floating-point (FP64) performance

THANK YOU BASED NVIDIA

AYYMD HOUSEFIRES SIMPLY CAN'T COMPETE WITH REAL PERFORMANCE & POWER EFFICIENCY

Thanks for confirming

Well, think about it.

Unlike Intel, NVIDIA aren't just being complacent with their superior hardware and dominance in the major markets.

If AMD don't step their shit up in a major way with Navi (seeing as Vega is already a massive failure) then they may never recover.

Unlike nVidia, Intel has actual revenue and produces something else besides toys.

Funny how Intel are incredibly threatened by NVIDIA then.

Where? In HPC where there's barely ANY money?

>barely any money
>NVIDIA's revenue grows 48% year over year purely from the data center

Wow, it was $150 million and now it's $300 million! Intel cannot compete!

>In the recent quarter ended April 30, NVIDIA's revenue increased by 48% reaching $1.94 billion compared to previous year. A big revenue bump came from its Data centre business which recorded $409 million revenue in the first quarter of this fiscal, up 186% year-on-year.

It's time you went back to Sup Forums you utterly clueless child.

25% min

30% would be my guess

45%+ is god tier architecture

Wow, Intel literally cannot compete with their ~$4 billion quarterly data center revenue.
Literally finished and bankrupt.
Long live GPGPU.

...

Yes, this year will totally be the year of GPGPU compute!
Fucking hell, GPGPU is literally the Linux desktop of the hardware world.

Great shitposting there Sup Forums

Jensen fucking stop.
Everyone knows GPGPU will always be a niche meme in datacenter.

>niche
Ahahahahahaha oh wow Sup Forums you're adorable

They could literally release the same line up, call it Pascal Pro and drop prices by $70 on each GPU and people would still buy it.
Then they could do it again and drop another $70 while naming it Voltaris or something.
By 2020, when AMD releases Navi and finally manages to reach 980 Ti performance, which by then would cost only $500, they could release Volta.

Nvidia could do all of that just like AMD did with R200, R300 and RX400 and they wouldn't lose 1% of market share, that's how far ahead Nvidia is.

Oh yes, call me when a GoyPU can work as a fileserver.

You're not even trying now Sup Forums come on

Jensen stop.
This is getting hilarious.

What's wrong Sup Forums have you run out of shitposting material? Consider checking Reddit again since you clearly do that on a daily basis.

But R200/R250 was shit.
ATi made the smart choice of canning it and developing R300.

This year will totally be the year of GPGPU compute!
I-i swear.
Buy our GPUs.

Oh maybe you actually were trying but you were just too stupid to come up with anything decent.

>E-everyone is totally going to use GPGPU compute, you just wait CPUdrone!

>muh gaymes

(You)

1080 Ti*

*rumors