Will they disable the "Tensor" units on gamer hardware (and just waste die area)...

Will they disable the "Tensor" units on gamer hardware (and just waste die area), or will chips like the GV102 and GV104 not have these units at all?

Is tensor core just another name for cuda cores, also known as shader cores or stream processors?

Who the fuck cares, it's another Pascal.

It's FP16 × FP16, I think.

It's not wasted die area. They "bin" the bad chips; otherwise they'd just be trash.

Like, they had all the R&D time in the world and all we got was another Pascal?

No, I think it's special hardware that does an FP16 × FP16 multiply with an FP32 add (a multiply-accumulate). I assume it will be useless for gaming.
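For anyone unsure what that pattern looks like, here's a minimal CUDA sketch (the kernel and names are made up for illustration, not from NVIDIA's docs): FP16 inputs, product accumulated in FP32.

#include <cuda_fp16.h>

// Hypothetical example: convert two FP16 operands and accumulate
// their product into an FP32 buffer (the mixed-precision FMA pattern).
__global__ void fp16_mul_fp32_acc(const __half* a, const __half* b, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] += __half2float(a[i]) * __half2float(b[i]);
    }
}

(A tensor core supposedly does a whole 4x4 matrix of these multiply-accumulates per clock instead of one at a time.)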

>talk about Pascal
>Who the fuck cares, it's another Maxwell
>talk about Volta
>Who the fuck cares, it's another Pascal

not really. if you google for pascal block diagrams, you'll see that the SMs are different.

> 815 mm^2

Holy shit! Is it the largest CPU/GPU ever made? What's the die area of Intel's largest Xeon?

Consumer models will have no Tensor Memes so it's going to be another Pascal.
Intel had Tukwila (Itanium) at ~700mm^2.

try again newfag ;)

devblogs.nvidia.com/parallelforall/inside-volta/

> Unlike Pascal GPUs, which could not execute FP32 and INT32 instructions simultaneously, the Volta GV100 SM includes separate FP32 and INT32 cores, allowing simultaneous execution of FP32 and INT32 operations at full throughput, while also increasing instruction issue throughput.
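To see why that matters, a contrived CUDA sketch (names are mine): an ordinary inner loop spends instructions on INT32 index arithmetic as well as FP32 math, and per the quote Pascal could not execute the two at the same time, while Volta can issue them to separate cores.

__global__ void strided_fma(const float* x, float* y, int n, int stride)
{
    int base = blockIdx.x * blockDim.x + threadIdx.x;
    for (int i = base; i < n; i += stride) {  // INT32 work: add, compare
        y[i] = y[i] * 0.5f + x[i];            // FP32 work: fused multiply-add
    }
}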

How big is the latest Xeon Phi? (Knights something)

Landing?

>Consumer models will have no Tensor Memes so it's going to be another Pascal.

No way they disable it; it's almost half the die. Nvidia will make it work in games.

How? It's intended for ML.

Wow they separated the shaders. It's still Pascal.

It's the same idea as
en.wikipedia.org/wiki/Tensor_processing_unit

Oh they WILL disable it, and sell the models with it enabled for hundreds more.

Each Tensor Core provides a 4x4x4 matrix processing array which performs the operation D = A × B + C, where A, B, C, and D are 4×4 matrices, as Figure 7 in the post shows. The matrix multiply inputs A and B are FP16 matrices, while the accumulation matrices C and D may be FP16 or FP32 matrices.

the figure didn't copy over; read the original text here...

devblogs.nvidia.com/parallelforall/inside-volta/
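For the curious, this is roughly how the operation surfaces in CUDA through the warp-level matrix multiply (WMMA) intrinsics; the API exposes 16x16x16 fragments that map onto the 4x4x4 tensor core ops underneath, and the kernel and pointer names below are made up for illustration:

#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes D = A*B + C on a 16x16 tile:
// A and B are FP16, the accumulator (C and D) is FP32.
__global__ void wmma_d_equals_ab_plus_c(const half* a, const half* b,
                                        const float* c, float* d)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::load_matrix_sync(a_frag, a, 16);                         // load A
    wmma::load_matrix_sync(b_frag, b, 16);                         // load B
    wmma::load_matrix_sync(acc_frag, c, 16, wmma::mem_row_major);  // load C
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);            // acc = A*B + C
    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major); // write D
}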

This. It's a rebrand.

They probably won't even have those on die in consumer models.

AMD hypes FP16; Nvidia now has all-new tensor cores. AMD is a step behind again.

>We literally have another 20 days before we even see Vega for 3 minutes

And it'll be basically Pascal. Meh.

it's got lots of those little green squares like Kepler! Volta is just a Kepler rebrand!

And Kepler is gutted Fermi.

and Skylake is Nehalem on steroids

and you tards don't know shit about microarchitecture

And Nehalem is drugged P6.

>implying Sandy Bridge wasn't just Pentium 5

$3 billion USD for a rebrand? Come on, troll. Nvidia isn't that stupid.

It's a waste of space on consumer chips. I'm sure they'll remove most of the FP64 and FP16 units.

And then it's Pascal.

Gameworks

This really doesn't seem that impressive, just basic 1/2 rate for high end cards.

>This really doesn't seem that impressive, just basic 1/2 rate for high end cards.
Because it's nothing impressive. Fuck, it's only 15 TFLOPS FP32 at boost clocks on an ~800mm^2 die. Weird.
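That 15 TFLOPS is just the usual FMA arithmetic, assuming the 5120 enabled FP32 cores and ~1455 MHz boost clock quoted for the Tesla V100 part:

5120 cores × 2 FLOP/clock (FMA) × 1.455 GHz ≈ 14.9 TFLOPS FP32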

it's not targeted at fp32 gaymers, it's targeted at HPC and ML

Which is fucking INT8/FP16 at best.

Who cares?
If you want to do 2xFP16 calculations, this won't make them go any faster since the output side is the bottleneck; there's no real advantage to this, at least going by the info Nvidia gave.

HPC is all FP64, you dummy.

HPC is an extremely small market, and GV100 is presumably aimed at the ML market, which uses INT8/FP16. And ASICs.

what's the yield on this?
1.7%?

Ye. Rev up those memes.