Poor Volta

Is this Vega all over again?

videocardz.com/74382/overclocked-nvidia-titan-v-benchmarks-emerge

Even nVidia is hitting the scaling wall in graphics?
That's not nice, it's merely 5120 ALUs.

>big die size
>poor gaming benchmark
>expensive af
>wait for driver™

Nvidia gone full AMD RTG now

It’s amazing that it’s that fast at all, given that half the die space is taken up by FP64 and tensor cores which go unutilized in these tests.

Wow it's like GPU IHVs are all the same fucking shit.
It's still 5120 FP32 ALUs.

>this is a gpu for machine deep learning and not gaming
you all are fucking retarded

It's still perfectly usable for gayming 5120 FP32 ALUs.

yes, but nvidia didn't have gaymers with 3k dollars in mind when they made a gpu made for deep learning, did they?

Who cares? Vega FE wasn't for gamers yet it almost perfectly predicted Vega 64 performance in games. Whether a GPU is marketed towards gamers or not is completely irrelevant. Sure it has shit that games don't use, like FP64 and tensor cores, but that doesn't mean doing gaming benchmarks on it isn't valid. Xeons aren't for gaming yet lots of people play games on them.

>yes, but nvidia didn't have gaymers with 3k dollars in mind when they made a gpu made for deep learning, did they?
It's still 5120 perfectly viable for gaymen FP32 ALUs, and it even uses GeForce drivers.

So I get why someone might choose a Titan over a Quadro if they were an amateur or enthusiast into 3D rendering or compute or whatever, but how the hell is an amateur supposed to get into deep learning? What would be the use case of a Titan V over a Tesla?

>What would be the use case of a Titan V over a Tesla?
Price.
And it's not headless, you can actually throw it into a workstation and connect some monitors to it.

More importantly it has GeForce drivers, which is a big no no if there's any precision work involved.

>precision work
It's aimed at meme learning market.

Just like Vega FE

First, Volta is a new architecture.

I doubt the drivers are optimized for Volta.

The next gaming architecture is going to be Ampere.

>It's aimed at meme learning market.
With non-certified drivers. If you're building an AI-driven car you won't be using a Titan V, way too risky.

RIP in piss everyone else.

>With non-certified drivers.
Means shit for pure compute workloads.
"Certified" drivers are important for shitty vendorlocked CAD applications.
> If you're building an AI-driven car you won't be using a Titan V, way too risky.
You're not going to train a self-crashing car on a single fucking GPU, you need a cluster, preferably several.

anandtech.com/show/12135/nvidia-announces-nvidia-titan-v-video-card-gv100-for-3000-dollars

>Tesla V100 PCIe
$10000

>Tesla P100 PCIe
$6000

>Titan V
$3000

Not that bad

Vega Frontier has the same GPU as RX Vega; the GTX 2080ti will use GV102, not GV100.

Remove the FP64 and tensor cores and the smaller chip will clock high, and GDDR6 will get 764GB/s. Some user ran a LuxRender benchmark and says it consumes just 190W under load.

Anyone else notice the Dx12 gainz?

Maybe Volta finally has sensible async compute, so even if it isn't a big leap in raw oomph, it'll be nice for future titles. Much like Vega but with sensible TDP and drivers.

GV100 already clocks high, ~1900MHz in Boost 3.0.
>some user ran a LuxRender benchmark and says it consumes just 190W under load
Well, Luxrender doesn't hammer FF units.

It's oversized as a tech demo
It's not a gaymen GPU
It's not a gaymen GPU
It's not a gaymen GPU

Nshitia can do whatever the fuck they want since AMD is so poor right now. They want to show off Volta's raw power for doing shit with CUDA. That's why there's this and a Tesla Volta out.

>It's not a gaymen GPU
5120 FP32 ALUs and video outputs make it very much a gaymen GPU.

>GPU made for AI and shit
>benchmarking GAYmen shit

What about the GP100?

>gaming drivers
>has standard gaming reference cooler, only in gold
>"not for gaming"

pfff, L2 cachelets

Yes, because FP32 is only used in video games and never used for anything else. What the fuck are you doing on a technology board?

The one with video outputs (Quadro P100 or something) can also be such.
Did anyone game on P100?
As in benches?
You need to bin L2$ along with IMCs in Nvidia cards otherwise you'll get 3.5.
Yes, because FP32 is unusable for games.

>gaming reference cooler
are you retarded? it has their stock standard cooler, which has nothing to do with the fact that it wasn't made for gayfags

If you haven't noticed, Quadro/Tesla use their old, less edgy design. Like that one.

Nvidia ships all Titan devices with the GeForce drivers.

RX Vega = Vega FE
1080ti = Titan X
2080ti =/= Titan V

>Quadro/Tesla use their old, less edgy design
irrelevant.

>new card uses newer cooler instead of the old one
WOW! SHOCKED!

There is no GV102. Volta is all about deep learning. Gamers get the scraps, which is a Pascal refresh.

Vega Frontier still uses the same GPU as RX Vega

>Yes, because FP32 is unusable for games.
I'll ask again: What are you doing on a technology board?

Actually, I'm guessing you didn't understand why I asked you this question in the first place, since the only thing you seem to understand is vidya. Let me rephrase: what are you doing on a technology board when you don't even know what an IEEE 32-bit floating point is used for in computing besides video games?
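
For reference, an IEEE 754 single-precision float is just 1 sign bit, 8 biased exponent bits, and a 23-bit mantissa — nothing game-specific about it. A quick sketch (helper name is made up) pulling the fields apart:

```python
import struct

def decode_fp32(x):
    """Unpack an IEEE 754 single-precision float into (sign, exponent, mantissa)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    sign = bits >> 31                # 1 bit
    exponent = (bits >> 23) & 0xFF   # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF       # 23-bit fraction
    return sign, exponent, mantissa

# 1.0 is stored as sign 0, exponent 127 (the bias), mantissa 0
print(decode_fp32(1.0))   # (0, 127, 0)
print(decode_fp32(-2.0))  # (1, 128, 0)
```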

And Tesla P40 uses the same GPU as 1080ti.
Btw did you know that GP102 supports packed int8 at a 4:1 rate relative to FP32?
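
That 4:1 rate comes from dp4a-style instructions: four signed 8-bit values packed into one 32-bit word, dotted and accumulated in a single op. A rough software model (modeled loosely on CUDA's `__dp4a` intrinsic; the Python helper names are just for illustration):

```python
def pack_int8x4(a, b, c, d):
    """Pack four signed 8-bit ints into one little-endian 32-bit word."""
    return (a & 0xFF) | ((b & 0xFF) << 8) | ((c & 0xFF) << 16) | ((d & 0xFF) << 24)

def unpack_int8x4(w):
    """Extract the four signed int8 lanes from a 32-bit word."""
    lanes = [(w >> (8 * i)) & 0xFF for i in range(4)]
    return [v - 256 if v > 127 else v for v in lanes]

def dp4a(x, y, acc=0):
    """Software model of a 4-way int8 dot product with 32-bit accumulate."""
    return acc + sum(a * b for a, b in zip(unpack_int8x4(x), unpack_int8x4(y)))

x = pack_int8x4(1, -2, 3, 4)
y = pack_int8x4(5, 6, 7, -8)
print(dp4a(x, y))  # 1*5 + (-2)*6 + 3*7 + 4*(-8) = -18
```

One FP32 lane's worth of register holds four int8 lanes, hence four multiply-accumulates per cycle where FP32 does one.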

And it will have the same performance level in gaymes.

What fucking stops me from using FP32 card with video outputs for games?

Vega Frontier is a general purpose GPU.
Volta is purely designed for deep learning and astronomically expensive to produce. Not for consumers.

nVidia would totally sell smaller GV102 as V40 for cheapo training/inferencing.
V100 is big and powerful, but not that cost effective unless you live in commiefornia.

Nothing. What makes you think FP32 + video output means only video games?

GPUs need special fixed-function circuits for textures, pixel fill, polygons, and rasterization; these parts are the graphics portion of a GPU. Intel tried Larrabee but failed because they couldn't get coverage of the patents on it.

The rare thing is that V100 has the graphics parts, meaning we could get Volta as a consumer GPU. If V100 didn't have graphics hardware, the Titan V wouldn't run OpenGL, DirectX, or Vulkan for games.

It doesn't mean *only* bibeogames.
It means *also* bibeogames.
Anything with FP32 and video outputs can be used for gayms.
Including said Titan.
Larrabee's failure was not patents (everything but texture mapping was done in software), but horrific area inefficiency.
Only ~1/3rd of the Aubrey Isle die was actually used for SIMD processing.
Compare it to any relevant GPU.

reddit.com/r/nvidia/comments/7iq5tk/just_got_a_titan_v_this_morning_ask_me_to/

Furmark

146 fps @ 2560x1440, no msaa.
Power consumption for whole system peaked at 387 watts (vs 108 idle)
Userbench

Lighting 582
Reflection 500
Parallax 766
MRender 385
Gravity 657
Splatting 233
Full results at gpu.userbenchmark.com/SpeedTest/395529/NVIDIA-TITAN-V
Power consumption for whole system peaked at 405 watts (vs 108 idle)

Poor Vega: it consumes more than this whole system with an 8700K and a Titan V (815mm²).

>If you're building an AI-driven car
You're thinking of the wrong card.

>headless passively cooled piece of shit
Right, how do I throw it into my workstation?

You don't.
Another useful tidbit: sticking a workstation in the corner of your basement to figure out driverless cars isn't going to have a big ROI.

Those cards are highly reliant on external cooling.

Indeed it's oversized af.

Just wondering about the hype around Tesla, even though the improvement in gayming performance isn't that significant...

Well, while it's not that powerful in gayming, it crushes in the compute space.

> how the hell is an amateur supposed to get into deep learning?
Neural network programming is a LOT less complex than you think it is. The difficult part is building the network correctly and having hardware powerful enough that training doesn't take 60 years.
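
To illustrate how little code the basics take, here's a minimal sketch (plain Python, no frameworks; all names hypothetical): a one-hidden-layer network trained by gradient descent on XOR, the classic problem a single neuron can't learn.

```python
import math, random

random.seed(0)

# XOR dataset: not linearly separable, needs a hidden layer
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
    return h, sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

lr = 1.0
start = loss()
for _ in range(5000):
    for x, y in data:
        h, out = forward(x)
        d_out = 2 * (out - y) * out * (1 - out)      # gradient at the output unit
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])  # backprop into hidden unit j
            w2[j] -= lr * d_out * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out
print("loss before/after:", start, loss())
```

That's the whole algorithm; everything a framework adds on top is about scaling it, which is exactly where the hardware (and cards like this) comes in.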

THANK YOU BASED NVIDIA

I wouldn't even buy this for simulations and you fucking retards are complaining about vidya benchmarks

I'm going to buy one for the competitive edge in TF2.

Notice how Superposition in OP's image is higher than everything else.