>NVIDIA Volta GV100 GPU Based Tesla V100 Benchmarked – A Monumental Performance Increase in Geekbench Compute Test Over The Pascal GP100 Based Tesla P100
>NVIDIA’s flagship and the fastest graphics accelerator in the world, the Volta GPU based Tesla V100 is now shipping to customers around the globe
>The score puts the Tesla V100 in an impressive lead over its predecessor, which is something we are excited to see. It also shows that we could be looking at a generational leap in the gaming GPU segment if the performance numbers from the chip architecture carry over to the mainstream market. Another thing that should be pointed out is the incredible tuning of compute output with the new CUDA API and related libraries
It's because the Tesla cards all run on servers while the others are workstations. Not saying you can't mix and match the other way around, but you're really not going to see an HPC Windows server, nor a workstation with Teslas it can't effectively cool, so by default all Tesla results will be on Linux.
Dominic Scott
All the important software runs on Linux servers.
Jeremiah Carter
Geekbench doesn't use the tensor cores or 16-bit/64-bit compute; the redesigned SMs are what make Volta massively fast.
The GTX x80 will be 70-100% faster than the 1080, with over 700 GB/s of GDDR6.
7nm GPUs in Q3 2019. RIP Vega.
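For reference, peak memory bandwidth is just per-pin data rate times bus width; a quick sketch of that bandwidth claim (the 16 Gbps rate and 384-bit bus are assumptions for illustration, not confirmed specs):

```python
# Peak memory bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8.
# The 16 Gbps GDDR6 rate and 384-bit bus below are hypothetical, not confirmed specs.

def mem_bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# A hypothetical 384-bit card with 16 Gbps GDDR6:
print(mem_bandwidth_gbps(16, 384))  # -> 768.0, clearing the 700 GB/s mark
# For comparison, the GTX 1080's 10 Gbps GDDR5X on a 256-bit bus:
print(mem_bandwidth_gbps(10, 256))  # -> 320.0
```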
David Cook
...
John Rivera
But all the cards use the same bench; will the P100 be faster in other benchmarks?? TFLOPS mean nothing on paper
Jacob Long
AMD can't even beat the GP100, and now this. ayyyyyymd
Jace Gomez
Only $20,000 for a meme learning card, buy yours today!
Aaron Davis
>TFLOPS mean nothing on paper
>TFLOPS mean nothing
>in compute benches
Anyway, why would I buy this overpriced turd, paying the retarded GPU tax, instead of buying a DLU or Crest Lake or whatever new meme processor comes forth?
Thomas Adams
Worst case, GV100 doubling GP100's performance while using only 50% more TFLOPS means Volta is a far more efficient arch, not just moar flops.
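The arithmetic behind that claim, as a sketch (the 2x speedup is the poster's number, not measured data; the peak FP32 figures are the published SXM2 specs):

```python
# Perf-per-TFLOP comparison using the claim above: GV100 at 2x GP100
# performance with roughly 1.5x the raw TFLOPS. The 2x speedup is the
# poster's claim, not a measurement.
gp100_tflops = 10.6   # FP32 peak, Tesla P100 (SXM2)
gv100_tflops = 15.7   # FP32 peak, Tesla V100 (SXM2)

perf_ratio = 2.0                          # claimed benchmark speedup
flops_ratio = gv100_tflops / gp100_tflops # raw throughput increase
per_flop_gain = perf_ratio / flops_ratio  # efficiency gain per unit of FLOPS

print(f"{flops_ratio:.2f}x the FLOPS")        # ~1.48x
print(f"{per_flop_gain:.2f}x perf per FLOP")  # ~1.35x
```

So even taking the claim at face value, about a third of the speedup would have to come from the architecture rather than raw throughput.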
Charles Thomas
The only reason Tesla cards are so powerful is the fuckhueg die. Consumer Volta will be 15% faster if you're lucky. Kill yourself, retard.
Samuel Price
Fiji is close to 80% more efficient per CU compared to Tahiti, user. And it's the SAME node. Throwing moar ALUs means nothing. The only good part of meme100 is CUDA9.
Liam Johnson
The technological ignorance of Nvidiots is just astounding.
Luis Flores
>t. AMDfag raging and praying Volta will be bad.
Anthony Mitchell
Stop shitposting. If you want an HPC peenus-weenus measuring contest, then Vega 20 comes sometime in 2018.
Aiden Kelly
Sure, people love programming OpenCL in HPC.
Cameron Wilson
They will once AMD actually bothers pushing ROCm instead of assassinating Quadro in CAD or modelling or whatever.
Carson Kelly
AMD's never going to catch up. Nvidia actually provides developer support for their big partners and even deploys developers to certain partners. They're very proactive and actually want the market, unlike AMD who'd rather push out some half-assed FOSS solution with 0 support. AMD is a company that doesn't want "it". They have no drive.
Aaron Baker
>Consumer Volta will be 15% faster if you're lucky. Kill yourself, retard.
damn, that's straight up delusional
Colton Sullivan
Aaaaaaaaaaaaaaaaaaaaaaaaaaaaand that's where you're wrong. Just look at ProRender.
>Nvidia actually provides developer support for their big partners and even deploys developers to certain partners.
So does AMD, but it's mostly VFX or CAD and not the usual GPGPU stuff.
>They're very proactive and actually want the market
So does AMD. P47 is a living example of AMD wanting to capture (and they will) the market.
>some half-assed FOSS solution with 0 support
I wouldn't call ProRender that. Bigger dies are bigger dies. There's no way around it.
Adam Long
Just remove the tensor cores and 64-bit ALUs and GV100 will drop well below 600mm².
John Green
And somehow it will only be 15% faster? Nvidia doesn't even need to release Volta to achieve a 30% improvement: rebrand the 1070 as the 2060 and the 1080 Ti as the 2080. Congratz, you just made 30% gains.
Sebastian Jones
It still means that 60% margins are dead. That's what V-series SKUs will be.
Daniel Gutierrez
And you think this is bad, even though you get more for the same amount of $?
Liam Bennett
>HPC >ROCm
i have no idea what the people in this thread are talking about
Noah Baker
they think ppl who actually use Tesla cards check reviews for their performance numbers. these ppl are tech illiterate, and they will get Tesla cards because they always had Tesla cards, not some piece of shit AMD shitters
Sebastian Gonzalez
you talk like a pajeet
Christian Adams
>HPC
High performance computing. 1U-4U racks full of GPUs doing FP64 numbercrunching. That, or HUEG CPU farms doing everything else that is bound by memory capacity.
>ROCm
Radeon Open Compute platforM - AMD's attempt at fixing OpenCL for Radeons by adding shit OpenCL can't do and writing basic libs (and it's also a vendorlock). Still in its infancy though. AMD will start throwing money at it probably in a year or so.
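That FP64 numbercrunching is why the fat dies exist: peak double-precision throughput is just FP64 units × 2 FLOPs per FMA × clock. A sketch using Tesla V100's published figures (2560 FP64 cores, ~1530 MHz boost):

```python
# Peak FP64 throughput: units * 2 (one fused multiply-add = 2 FLOPs) * clock.
# Figures used below are Tesla V100's published specs.

def peak_fp64_tflops(fp64_units: int, boost_clock_ghz: float) -> float:
    """Theoretical peak double-precision throughput in TFLOPS."""
    return fp64_units * 2 * boost_clock_ghz / 1000  # GFLOPS -> TFLOPS

print(peak_fp64_tflops(2560, 1.53))  # ~7.8 TFLOPS, matching V100's advertised FP64 rate
```

Consumer cards ship with a fraction of those FP64 units, which is why the Tesla-only die area argument keeps coming up in this thread.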
Christian Lopez
does this mean that leenux will have to use ROCm and it will have danker graphics?
Henry Williams
No. It's compute. And ROCm is meh now.
Jaxon Kelly
whats a better card for getting uhhh free wifi
William Ortiz
>ROCm >platforM Was there any specific reason to avoid the letter P in that abbreviation?
Hunter Collins
Dunno ask AMD. It was originally called Boltzmann initiative.
Luis Scott
>Boltzmann initiative
>tfw you try for a cool sciency name but end up with a secret Nazi project instead
Austin Moore
>secret Nazi project
And CUDA was Compute Unified Device Architecture once.
Hudson Sullivan
No argument, at least, can you elaborate?
Nathan Powell
>amdrones
>arguments
kek
Brody Robinson
See Kill yourself.
Jaxson Brown
>I sure love overpriced Nvidia GPUs almost as much as I love sucking dick.