Volta

So what is Nvidia planning with Volta?

>'12'nm process (still higher density than 14nm) but they only focus on improved perf/watt for GV100
>815mm² die, 80 SMs, so no density improvement for SMs
>most of new work seems to have gone into the Tensor cores, but Google's Tensor chips are already ahead of Nvidia's offering
>36.9% improvement in SMs/clock over GP100, plus a rumored but not confirmed perf/clock gain (although we don't know how clocks scale for fewer SMs)

So what markets are they specifically targeting? For compute tasks it seems like they're trying to fend off Vega and possibly get developers to start programming for the tensor cores to supplement the existing cores.
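If they do pull devs onto the tensor cores, it would presumably be through something like the WMMA API slated for CUDA 9. A minimal sketch, assuming the preview docs hold up: one warp cooperatively does a 16x16x16 FP16 multiply with FP32 accumulate, and everything outside the wmma:: calls (kernel name, pointers, tile choice) is just made up for illustration.

#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes D = A*B + C on a 16x16x16 tile: A/B in FP16, accumulate in FP32.
__global__ void tensor_mma_16x16x16(const half *a, const half *b, float *d)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);                 // start the accumulator at zero
    wmma::load_matrix_sync(a_frag, a, 16);                // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);   // issued to the tensor cores
    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
}

Needs -arch=sm_70 to even compile for the tensor path, so it's GV100-only hardware anyway.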

As for gaming I'm at a complete loss as to what they're trying to do. Will they use the increased density potential to fit more SMs into the same space as previous dies, protecting their profit margin? Or do what they did with GV100 and only increase perf/watt? If it's the latter, will they increase the die size of the chips to make them more competitive in performance, or rely solely on the clock speed increase and the rumored slight increase in perf/clock?

What do you guys think?

I think they are finished and bankrupt

>I think they are finished and bankrupt
Seems about right

Going for raw efficiency isn't a bad plan. Until recently it was a laughable concept to cram desktop-grade components into a laptop or SFF system without starting a house fire, but now there's almost no real performance difference between any size of PC you could want, except for expansion capability of course.

they want the sweet cryptopucci

It might actually lean in to DirectX 12 and Vulkan.

It's "12"nm node which is rebranded 16nm FFN+ which is 20nm BEOL + FinFETs. Also 1.7 yields are gonna bite their ass.

>Nvidia
>using hardware scheduler ever again

>NVIDIA
>dynamic scheduling
Not even in a hundred years, friend.

There are no density increases; GV100 and GP100 have roughly the same number of transistors per mm².
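Going by the published figures it's roughly 21.1B transistors / 815mm² ≈ 25.9M/mm² for GV100 vs 15.3B / 610mm² ≈ 25.1M/mm² for GP100, so basically a wash.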

At least it got more efficient due to the better node.

Probably, but TDP isn't power consumption so that's up in the air.

I don't think it'll be higher than GP100.

A tensor is a kind of higher-dimensional matrix used in deep learning; the tensor cores on the GPU and Google's tensor chips aren't the same thing.

About Volta: Maxwell got cut back because the 20nm node never happened, Pascal was a Maxwell refresh, so Volta is the first new arch in almost 5 years.

You're not gonna see the GV100 tested to its full capabilities anyway.

TPU2 does exactly the same thing, but cheaper, smaller and more efficient, user. Bolting an ASIC onto a GPU for the sake of meme learning is stupid.

Who the fuck cares? Volta for consumers is Kekler Refresh 2.0: electric boogaloo. It's also extremely boring when Radeon is about to show their first purely gaming chip since the Evergreen glory days.

>purely gaming
Double rate FP16 is not purely gaming, user.

It's tertiary compared to the bulk of Vega's changes.

FP16 is actually heavily used on consoles; I imagine at least some ports have leftover FP16 code.

It's actually a really good thing for anything that doesn't need the precision of fp32
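On hardware that runs FP16 at double rate you get the benefit by packing two values per register, something like the rough sketch below using the half2 intrinsics from cuda_fp16.h; the kernel name and shapes are just for illustration.

#include <cuda_fp16.h>

// Packed FP16 axpy: each __half2 holds two halves, so one __hfma2 retires two
// fused multiply-adds per instruction on parts with double-rate FP16.
__global__ void haxpy2(int n, __half2 alpha, const __half2 *x, __half2 *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = __hfma2(alpha, x[i], y[i]);   // y = alpha*x + y, two lanes at once
}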

I don't really think anything on PC will use FP16.