>>60325020

You made the same thread already. It died. No, Volta is beefier Pascal.


Is it time for Nvidiots to JustWait™ half a year+ for consumer Volta crap?

Hynix seems to agree
videocardz.com/68948/skhynix-gddr6-for-high-end-graphics-card-in-early-2018

What does "early" 2018 mean? March, or January?

Going by the fact that they didn't manage to get it out for the holiday season, I'm assuming we're not looking at January.

tl;dr you're gonna be waiting for Volta like you waited for Vega, then you're gonna be waiting another 6 months for Navi unless AMD gets enough cash to rush it earlier

Yes.

That's the only way I think Vega has a chance. A full year of Volta-less competition.

Volta is simply a beefier Pascal.

> GPU
> No connectors for connecting it to an actual screen

Perhaps it's got some specialized use-cases that I'm missing.

Can it mine crypto like zcash and eth?

Lol fuck consumers, Nvidia is becoming a machine learning company

Or should I say The machine learning company.

Who created The deep learning libraries that everyone uses?

Whose GPGPU framework is The one to use? Nvidia's, with CUDA

What was used to make almost every major deep learning discovery over the last several years? An Nvidia card using Nvidia CUDA and Nvidia CuDNN.

Praise Nvidia
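The reason CUDA/cuDNN dominate deep learning is that inference is mostly dense matrix multiplies. A toy sketch with NumPy standing in on the CPU (all shapes made up); on an Nvidia card a framework would dispatch these same matmuls to cuBLAS/cuDNN kernels:

```python
import numpy as np

# Toy two-layer MLP inference pass. On an Nvidia card, frameworks
# dispatch these same matmuls/activations to cuBLAS/cuDNN kernels;
# here NumPy stands in on the CPU. All shapes are made up.
rng = np.random.default_rng(0)

x  = rng.standard_normal((32, 784))   # batch of 32 "images"
w1 = rng.standard_normal((784, 128))
w2 = rng.standard_normal((128, 10))

h = np.maximum(x @ w1, 0.0)           # ReLU hidden layer
logits = h @ w2
pred = logits.argmax(axis=1)          # predicted class per sample

print(pred.shape)                     # (32,)
```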

While that's true, 12nm allows them to stick more cores on it, which is a legit way to increase performance. Not a very future-looking one, but it's still legit.

Also nothing is stopping Nvidia from making bigger dies for Volta compared to Pascal, so instead of a 470mm^2 GP102 we'd get a 550-600mm^2 GV102, and 330mm^2 vs 420mm^2 for GV104

In fact Nvidia did exactly this for their GTX600>700 refresh, both were Kepler
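A rough sketch of why bigger dies cost more per chip than the area alone suggests: fewer gross dies per wafer plus worse yield. Wafer price and defect density here are placeholder guesses, not real foundry numbers, and the yield curve is the simple Poisson model:

```python
import math

# Cost-per-good-die sketch for the die sizes discussed above.
# WAFER_PRICE_USD and DEFECT_DENSITY_PER_MM2 are assumptions,
# not real foundry figures; yield uses the Poisson model e^(-D*A).
WAFER_DIAMETER_MM = 300
WAFER_PRICE_USD = 7000          # assumed
DEFECT_DENSITY_PER_MM2 = 0.001  # assumed (0.1 defects/cm^2)

def cost_per_good_die(die_area_mm2):
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    # crude gross-die estimate, ignoring edge loss and scribe lines
    gross_dies = wafer_area / die_area_mm2
    yield_frac = math.exp(-DEFECT_DENSITY_PER_MM2 * die_area_mm2)
    return WAFER_PRICE_USD / (gross_dies * yield_frac)

for area in (330, 470, 600):
    print(f"{area} mm^2: ~${cost_per_good_die(area):.0f} per good die")
```

Note the superlinear scaling: going from 330mm^2 to 600mm^2 roughly doubles area but more than doubles cost per good die, which is the "ridiculously expensive" point made below.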

At a lower speed than a Radeon Pro Duo card, but it can
Just for $150,000

N E U R A L N E T W O R K S

Artificial intelligence
Inference
Currently the major bottleneck is bandwidth, so they made NVLink and now they're making it even better.

Next will be memory. Expect 128GB+ VRAM GPUs in the next several years. Though they may stop being GPUs, and a whole new market will open up of TPUs, Tensor Processing Units.

>Perhaps it's got some specialized use-cases that I'm missing.
It's for compute, you tard, not graphics. Similar to the Xeon Phi, except it can't boot an OS. Also their GRID cards don't have display connectors and are intended for use in virtualized desktop environments.

>While that's true 12nm allows them to stick more cores on it, which is a legit way to increase performance, not a very future-looking one but it's still legit.
Nvidia promised Volta was an all-new arch
>Also nothing is stopping Nvidia from making bigger dies for Volta compared to Pascal,
A bigger die on a new process is ridiculously expensive

So you mean they are going to commit suicide the moment ML FPGAs become a reality? Wew.
It's an accelerator intended for pure compute tasks, hence no display output.

Most likely Q1

>ridiculously expensive
When has that been a problem for Nvidia? There's not much difference between 12nm FFN and 16nm+ anyway.

NVIDIA has no proper interconnect to throw that much memory.
Ah, well, it's probably a preemptive measure against Vega. Also 7nm FP64 Navi will be anally devastating for NVDA's ML market.

In a few years ya dummy

They're developing nvlink first ya big dummy

NVLINK is a card-to-card interconnect, not die-to-die. NVIDIA needs something like IF.

But Nvidia has no CPU to take advantage of IF

>Artificial intelligence
yeah because that always ends well for humanity

I miss her.

They will need IF when Navi MCMs multiple dies in the same package

That would actually explain why V100 is 815mm^2

It's inevitable so better get ready lel

Yes, again, I said in a few years; obviously they would need to work on it. The point is there's going to be major demand for increasing amounts of VRAM, up to absurd levels. If there's demand, there will be supply. Whatever needs to be done will be done; money will be thrown at it until it works.

>amounts of VRAM to absurd levels.
That's why NVRAM and the Radeon Pro SSG exist; Nvidia hasn't shown any intention of moving to anything similar.

Or you ignore VRAM and copy and paste HBCC.
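HBCC's pitch is basically that: treat VRAM as a cache over system memory and page data in on demand. A toy sketch of that idea as an LRU page cache (page counts and the access pattern are made up; real HBCC works on hardware page tables, not Python dicts):

```python
from collections import OrderedDict

# Toy sketch of HBCC-style paging: VRAM acts as an LRU cache of pages
# backed by system memory. Capacity and access pattern are made up.
class PageCache:
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.resident = OrderedDict()  # page id -> None, in LRU order
        self.faults = 0

    def access(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)  # hit: refresh LRU position
        else:
            self.faults += 1                 # miss: fetch over the bus
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)  # evict LRU page
            self.resident[page] = None

vram = PageCache(capacity_pages=4)
for page in [0, 1, 2, 3, 0, 4, 0, 1]:
    vram.access(page)
print(vram.faults)  # 6
```

The upshot: the working set can exceed physical VRAM, at the cost of page faults whenever it spills, which is why it only helps workloads with decent locality.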

That explains a lot of things, but the sheer potential of MCMed GPUs is kinda insane.

Probably May. Nvidia's always releasing GPUs in May.

>that fucking Deus Ex design