"Poor Volta" AMD said

>NVIDIA Volta GV100 GPU Based Tesla V100 Benchmarked – A Monumental Performance Increase in Geekbench Compute Test Over The Pascal GP100 Based Tesla P100

>NVIDIA’s flagship and the fastest graphics accelerator in the world, the Volta GPU based Tesla V100 is now shipping to customers around the globe

>The score puts the Tesla V100 in an impressive lead over its predecessor which is something we are excited to see. It also shows that we can be looking at a generational leap in the gaming GPU segment if the performance numbers from the chip architecture carry over to the mainstream markets. Another thing which should be pointed out is the incredible tuning of compute output with the new CUDA API and related libraries

wccftech.com/nvidia-volta-tesla-v100-gpu-compute-benchmarks-revealed/

...

>geekbench

>815mm2 and 15 tflops
Nope.
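For what it's worth, the 15 TFLOPS figure itself does check out against the published V100 specs (5120 FP32 cores at a ~1455 MHz boost clock, two FLOPs per core per clock via FMA). Quick sanity check:

```python
# Peak FP32 throughput = cores * FLOPs-per-clock * clock speed
cuda_cores = 5120          # published V100 FP32 core count
flops_per_clock = 2        # one fused multiply-add = 2 FLOPs
boost_clock_hz = 1.455e9   # ~1455 MHz published boost clock

peak_tflops = cuda_cores * flops_per_clock * boost_clock_hz / 1e12
print(round(peak_tflops, 1))  # 14.9 -- i.e. the quoted "15 tflops"
```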

>top results are all Linux
WINDOWS BTFO

...

>compute
>opencl
>cuda
>Sup Forums

computer illiterate: the post

It's because the Tesla cards all run in servers while the others are in workstations. Not saying you can't mix and match the other way around, but you're really not going to see an HPC Windows server, nor a workstation with Teslas it can't effectively cool, so by default all the Tesla results will be on Linux.

All the important software runs on Linux servers.

Geekbench doesn't use the tensor cores, 16-bit, or 64-bit compute; the redesigned SMs are what make Volta massively fast.

The GTX x80 will be 70-100% faster than the 1080, with over 700 GB/s of GDDR6 bandwidth.
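That bandwidth number is at least plausible as back-of-envelope GDDR6 math; the bus width and per-pin data rate below are assumptions for illustration, not announced specs:

```python
# Bandwidth (GB/s) = bus width in bytes * per-pin data rate in Gbps
bus_width_bits = 384   # assumed 384-bit bus, like the 1080 Ti
data_rate_gbps = 16    # assumed 16 Gbps GDDR6 per pin

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(bandwidth_gbs)  # 768.0 -- comfortably "over 700 GB/s"
```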

7nm GPU in 2019 Q3
RIP Vega

...

But all the cards use the same bench. Will the P100 be faster in other benchmarks?
TFLOPS mean nothing on paper.

AMD can't even beat the GP100, and now this. Ayyyyyymd.

Only $20,000 for meme learning card, buy yours today!

>TFLOPS mean nothing on paper
>TFLOPS mean nothing
>in compute benches
Anyway, why would I buy this overpriced turd, paying the retarded GPU tax, instead of buying a DLU or Crest Lake or whatever new meme processor comes forth?

Worst case, the GV100 doubles the GP100's performance while using only 50% more TFLOPS, which means Volta is a way more efficient arch, not just moar flops.
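The arithmetic behind that claim, with both numbers being the hypotheticals from the post above:

```python
# If performance doubles while raw TFLOPS grow only 50%,
# per-TFLOP efficiency improves by the ratio of the two.
perf_gain = 2.0     # hypothetical: GV100 = 2x GP100 performance
tflops_gain = 1.5   # hypothetical: only 50% more TFLOPS

efficiency_gain = perf_gain / tflops_gain
print(round(efficiency_gain, 2))  # 1.33 -- a third more work per TFLOP
```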

The only reason Tesla cards are so powerful is the fuckhueg die. Consumer Volta will be 15% faster if you're lucky. Kill yourself, retard.

Fiji is close to 80% more efficient per CU compared to Tahiti, user.
And it's the SAME node.
Throwing moar ALUs means nothing.
The only good part of meme100 is CUDA9.

The technological ignorance of Nvidiots is just astounding.

>t. AMDfag raging, praying Volta will be bad.

Stop shitposting.
If you want an HPC peenus-weenus measuring contest then Vega 20 comes sometime in 2018.

Sure, people just love programming OpenCL in HPC.

They will once AMD actually bothers pushing ROCm instead of assassinating Quadro in CAD or modelling or whatever.

AMD's never going to catch up. Nvidia actually provides developer support for their big partners and even deploys developers to certain partners. They're very proactive and actually want the market, unlike AMD who'd rather push out some half-assed FOSS solution with 0 support. AMD is a company that doesn't want "it". They have no drive.

>Consumer Volta will be 15% faster if you're lucky. Kill yourself, retard.
damn, that's straight up delusional

Aaaaaaaaaaaaaaaaaaaaaaaaaaaaand that's where you're wrong. Just look at ProRender.
>Nvidia actually provides developer support for their big partners and even deploys developers to certain partners.
So does AMD. But it's mostly VFX or CAD and not usual GPGPU stuff.
>They're very proactive and actually want the market
So does AMD.
P47 is a living example of AMD wanting to capture (and they will) the market.
>some half-assed FOSS solution with 0 support
I wouldn't call ProRender that.
Bigger dies are bigger dies.
There are no ways around it.

Just remove the tensor cores and 64-bit ALUs and the GV100 die would drop below 600 mm².
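Rough napkin math on that, where the share of the die taken by the FP64 ALUs and tensor cores is pure guesswork, not a measured figure:

```python
full_die_mm2 = 815            # published GV100 die size
assumed_removed_share = 0.30  # guess: FP64 ALUs + tensor cores ~30% of die

trimmed_mm2 = full_die_mm2 * (1 - assumed_removed_share)
print(round(trimmed_mm2))  # 570 -- under the 600 mm2 claimed above
```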

And somehow it will only be 15% faster?
Nvidia doesn't even need to release Volta to achieve a 30% improvement: rebrand the 1070 as the 2060 and the 1080 Ti as the 2080, and congratz, you've just made 30% gains.

It still means that 60% margins are dead.
That's what V-series SKUs will be.

and you think this is bad, even though you get more for the same amount of $?

>HPC
>ROCm

i have no idea what the people in this thread are talking about

They think people who actually use Tesla cards check reviews for their performance numbers. Those people are tech illiterate, and they'll get Tesla cards because they've always had Tesla cards, not some piece-of-shit AMD cards.

you talk like a shill

>HPC
High performance computing.
1U-4U racks full of GPUs doing FP64 numbercrunching.
That or HUEG CPU farms doing everything else that is bound by memory capacity.
>ROCm
Radeon Open Compute platforM - AMD's attempt at fixing OpenCL for Radeons by adding things OpenCL can't do and writing the basic libs (and it's also a vendor lock-in). Still in its infancy though. AMD will probably start throwing money at it in a year or so.
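On the "FP64 numbercrunching" point: HPC codes lean on double precision because it carries ~15-16 significant digits versus ~7 for single, which is exactly what consumer-oriented benches rarely exercise. A stdlib-only illustration of the gap:

```python
import struct

# Round-trip a double through 32-bit storage to see the precision loss.
x = 0.1 + 0.2                                      # computed in FP64
fp32 = struct.unpack('f', struct.pack('f', x))[0]  # squeezed into FP32

print(x)     # 0.30000000000000004 (FP64: ~16 good digits)
print(fp32)  # 0.30000001192092896 (FP32: ~7 good digits)
```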

does this mean that leenux will have to use ROCm and it will have danker graphics?

No.
It's compute.
And ROCm is meh now.

whats a better card for getting uhhh
free wifi

>ROCm
>platforM
Was there any specific reason to avoid the letter P in that abbreviation?

Dunno, ask AMD.
It was originally called the Boltzmann Initiative.

>Boltzmann initiative
>tfw you try for a cool sciency name but end up with a secret Nazi project instead

>secret Nazi project
And CUDA originally stood for Compute Unified Device Architecture.

No argument at all, then? Can you at least elaborate?

>amdrones
>arguments

kek

See
Kill yourself.

>I sure love overpriced Nvidia GPUs almost as much as I love sucking dick.

The post

I too love 2 wafers 1 die designs.