THANK YOU BASED NVIDIA

THANK YOU BASED NVIDIA

AYYMD IS FINISHED & BANKRUPT

AYYMDPOORFAGS CONFIRMED ON SUICIDE WATCH

DELET THIS

>15TFlops

dude my $200 RX 480 has fucking 6 TFLOPS. I'll just CrossFire three of them together and beat Volta. Too bad Nvidia's SLI doesn't do 3-way, KEK, software-gimped shit

Tensor what?

>AMD's answer to Pascal isn't even out a year later
>Nvidia already has a Volta chip ready
Based

Buzzwords for tech blogs like Gizmodo and the like. Kind of like Apple's Retina Display meme

>Thank you for making a product that will contribute to your eventual monopoly and freedom to raise prices as high as you want!

Blame AMD for that

based. can't wait to buy volta just because i can

How does this benefit you in any way?

From this chip, consumer GPUs will follow, obviously

it doesn't. Shitposting AI will replace Indian marketeers.

>does 15 Tflops at 5120 cores
>Vega does 12 Tflops at 4096 cores
Granted, we don't know what the core clock will be at for the Volta chips but I'm guessing AMD actually did it and has Volta-level performance.
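For what it's worth, you can back-calculate the clocks those quotes imply: peak FP32 TFLOPS is just cores × 2 FLOPs per cycle (one FMA) × clock. A quick sketch, using the 15 and 12 TFLOPS figures from the posts above (not official specs):

```python
def implied_clock_ghz(tflops, cores):
    # Peak FP32 = cores * 2 FLOPs/cycle (one fused multiply-add) * clock.
    # Rearranged to solve for the clock a quoted TFLOPS number implies.
    return tflops * 1e12 / (2 * cores) / 1e9

volta = implied_clock_ghz(15.0, 5120)  # ~1.46 GHz
vega = implied_clock_ghz(12.0, 4096)   # ~1.46 GHz
print(round(volta, 2), round(vega, 2))  # 1.46 1.46
```

Funny enough, both quotes imply the same ~1.46 GHz, so per-core throughput would be identical and the gap is all core count.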

CANT WAIT MORE

>815mm^2
jesus

there will never be a consumer variant of this chip

...

no deep learning libraries for AMD cards at all. AMD doesn't even compete in the fastest growing compute-intensive market.

I didn't say there would be a consumer variant of this chip, I said consumer chips would follow.

Same as with the Tesla P100, which spawned the Pascal consumer GPUs

Hardware created specifically for deep learning.

Bullshit. Tensors are what deep learning researchers/professionals call 3+ dimensional arrays/matrices. I agree it's redundant, but Nvidia allocating sections of hardware specifically for deep learning is more than just use of buzzwords.
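For anyone wondering what "tensor" actually means here, a minimal NumPy illustration (the shapes are arbitrary examples):

```python
import numpy as np

vector = np.arange(4)                    # rank 1, shape (4,)
matrix = np.arange(12).reshape(3, 4)     # rank 2, shape (3, 4)
tensor = np.arange(24).reshape(2, 3, 4)  # rank 3, shape (2, 3, 4)

# "Tensor" in the deep-learning sense is just an n-dimensional array;
# the advertised tensor cores accelerate small matrix multiply-accumulates
# over slices of arrays like these.
print(tensor.ndim, tensor.shape)  # 3 (2, 3, 4)
```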

> but is useless for gayimg

Excuse me but could you please delete this? Please delete it, my best friend just built his AMD #TeamRed Gamer PC featuring the RX 580 and Zen 1700. He is thinking about suicide since the Volta announcement, please delete this post and help me and my friend out.

Thank you.

But can it run Crysis?

why don't the retarded library authors use opencl?

DILIT

I honestly don't know, but I do know that CUDA is faster/better in some way. TensorFlow implements both a CUDA and an OpenCL backend, and the CUDA implementation is way faster.

You sound like a literal shill.

Disappointing FLOPS for a 12nm, 800mm²+ chip

Maxwell was 50% faster on 28nm at the same size.

It's the truth though. AMD doesn't compete in that market. I don't even mean they're failing to compete I mean they *do not* compete. They *do not* write their own deep learning libraries like Nvidia does.

The OpenCL version was a rushed port; the original codebase is CUDA

AMD bros, what do we wait for now?

Vega: 12.5 TFLOPS, $349
Volta: 15 TFLOPS, $899

>AMD doesn't even compete in the fastest growing compute-intensive market.
Except the Vega-based Instinct MI25 (12.5 TFLOPS FP32, 2×12.5 packed FP16) is already shipping to customers.

Retina is not just a stupid buzzword. I really dislike Apple, but they have the best laptop monitors by far.

>why don't the retarded library authors use opencl?
AMD hasn't released a card better than the 290 for years.
Until now.

...

THANK YOU BASED NVIDIA!

>Volta 15 TFLOPS 899$
it's actually 20 times as much

Shit, fuck, Retina™ has less pixel density than a 27" 4K monitor

fuck yes I love gpus that cost the same amount as a pc

Well, I would love to see AMD go bankrupt just to watch the Nvidia fanboys realize what a company does when there is no competition.

>picture
Is that two dies glued together, like AMD is planning to do with Navi?

Bad news for the reds if Nvidia managed to pull that one first.

It for sure is a buzzword to describe a certain resolution and DPI

Retina just means Apple decides at what density you're supposed to stop seeing pixels; it has nothing to do with the quality of the display

No, it's a massive ~815mm² monolithic die with 4 HBM2 stacks
Incredibly disappointing if this is Nvidia's answer to Radeon MI25

AMD dying would be the canary in the coal mine for x86. Intel clearly isn't capable of innovating without someone poking them in the ass, and Nvidia will just start charging $1500 for a GPU. It would only be a matter of time before cheap ARM CPUs take over.

>hurr durr AMD's patents will go to companies that can actually use them

They probably won't go to the right companies. Or they'll go to Intel's direct competition in other spaces like ARM.

It is you retard, just an IPS panel + high dpi.

Intel might even get broken up. Wouldn't that be fun.

MI25? Nah, this isn't targeted at the MI25, but at 7nm Navi coming out in 2018.

>800mm monolithic die
Lol
>PUT BIGGER CORES
If that's their answer, it's over.

the stocks to crash, with no survivors!
So that we can ride the high again in a few years

Retina is just high PPI in a display; no patent, technology, or process behind it.

>>>PUT BIGGER CORES

Compared to P100:

41% more fp32
42% more fp64
34% bigger die on 12nm
same clocks
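Those percentages roughly check out against the announced numbers (P100: 10.6 TFLOPS FP32, 5.3 FP64, 610mm²; V100: 15 FP32, 7.5 FP64, 815mm²), give or take rounding:

```python
# Published headline specs; percentages are just the ratios minus one.
p100 = {"fp32_tflops": 10.6, "fp64_tflops": 5.3, "die_mm2": 610}
v100 = {"fp32_tflops": 15.0, "fp64_tflops": 7.5, "die_mm2": 815}

for key in p100:
    gain_pct = (v100[key] / p100[key] - 1) * 100
    print(f"{key}: +{gain_pct:.1f}%")
# fp32_tflops: +41.5%
# fp64_tflops: +41.5%
# die_mm2: +33.6%
```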


Not even that.
It's just add moar corz

It barely beats the MI25 while being on a smaller process and a possibly much bigger die
Navi would destroy this

Finally, 1.7% yields live again.

So there is a small scaling advantage to 12nm TSMC
Though it's so small it might not even be worth making new photomasks for it

Nvidia hasn't even pretended to launch their best silicon in a consumer product since then, so 1.7% yields aren't really alive

Are you a retard or what?

The MI25 only has slow 1/16-rate FP64; it's not a real GPU for scientific work like the V100 with its 1/2-rate FP64

But it has 1/2 fp16, which is for machine/deep learning

2:1, not 1/2

>815mm^2

So now you just have to sell a kidney to pay for that crap?

It's 15000$.

Nothing uses FP64; if you need that kind of precision you're probably better off using Xeon Phis and AVX-512.
For ML, FP8 is more than enough.
Also, we don't know if the MI25 is 1/16 FP64; we only know it supports packed math

These are more like 4-5 top-of-the-line PCs

but, you're not a poorfag, right?

- Until recently OpenCL has been a pain in the arse to develop for
- CUDA is relatively nice
- Nvidia put a lot of effort into very optimized libraries like cuDNN, which a lot of frameworks are based on
- Nobody but AMD is trying to make NNs on OpenCL work (Intel has their own stuff, Qualcomm has their own NN chips/frameworks)
- OpenMP is a competing standard to OpenCL in a lot of ways
- Nvidia pays a lot of R&D money to stay at the forefront of NN

That said, once NN FPGAs hit the market this revenue branch will collapse for GPU makers. It will be similar to bitcoin mining and AMD. GPUs are way better at deep learning than CPUs, but they also bring a lot of baggage that is not needed and could be circumvented by dedicated silicon.

>The new Volta SM is 50% more energy efficient than the previous generation Pascal design, enabling major boosts in FP32 and FP64 performance in the same power envelope.

RIP IN PIECES AYYMDPOORFAGS

NOOOOO

WHERE IS VEGAAAA

If it's 50% more efficient, why is V100 only 40% faster with 40% more die size (on 12nm) at the same clocks?

Reminds me of Pascal
"Pascal is 10X faster than Maxwell in this very specific fine tuned application"

Because he's been huffing magical shill powered farts.

Source or you're full of shit, Volta is a 3 billion dollar design

Nvidia fanboys should be castrated. UHH UHH THEY SPENT 3 BILLION. Fuck off forever.

Like Pascal (which was die-shrunk Maxwell) was a "several billion dollar design"?

Here's the graph; you don't have to be a fucking genius to use a calculator. GV100 is 34% bigger on a smaller node but only ~40% faster in general-purpose applications at the same clock and wattage as Pascal.

tl;dr add moar corz

>HBM2
this is good news no matter which corporate dick you suck desu

>AMD bros, what do we wait for now?

Same as always, hope Vega is competitive, when it isn't just point out it's actually better cost/performance and call everyone else jews

And I spent 3 bucks on your mom, doesn't mean it was worth it.

>peak TFLOP rates based on boost clock
>BOOST FUCKING CLOCK

Wow, AMD uses base clock for their TFLOP rates.

FFS, why can't viral marketers learn to use >? You guys need separate Reddit and 4chan personnel.

>~800mm2 die
1.7 is back baby.

It's like when poo in loo telemarketers awkwardly try to use english idioms.

Oh well, I thought NVIDIA would actually make a new arch instead of reusing Maxwell again.

I only know my uni is pretty freetard-tier but the ML department still uses CUDA and Nvidia anything basically. They must be doing *something* right apparently

...

Looks nice. At least visually, Volta is another Maxwell.

Fuck off retard, nobody believes that

Volta is literally another Maxwell; GV100 has 15.3 TFLOPS FP32, which corresponds to the increased die size.

CUDA. They offer support and functionality that OpenCL frankly doesn't.

It's another case of open source being niche and limited because it lacks the financial backing, and of specialized proprietary software being mandatory.

>40% faster with 40%+ more die size on a new node
>NEW ARCHITECTURE GUYS

I'd be real worried if it was a new architecture; the gains wouldn't be that low.
Volta will, however, be a lot faster in certain matrix operations, as it's built for that.

Knights Landing is still better than this garbage.

Aaaaand Nvidia gets sued into oblivion by Elon Musk in 3...2...

So what was announced exactly? A die shrink, a Quadro card, and nothing for consumers until Q2 2018?

If Vega is as good as it sounds and just pounds Pascal in to the dirt how is AMD finished? Volta still can't DX12/Vulkan.

Why all the hype for such a shitty card?

All hype died the moment everyone saw the die size.

It's just fanboys, retards and trolls working themselves up into a frenzy. Like usual.

That Nvidia accelerator is 15 TFLOPS FP32; AMD's is 12.5 FP32 at its normal clock rate. Granted, these are accelerators with no video output at all, but it's the latest, greatest, and likely biggest silicon Nvidia makes.

GV100 is some ~300mm² bigger than Vega 10. What the fuck, Huang.

>They *do not* write their own deep learning libraries like Nvidia does.

Why does Nvidia do this? Is it because, if they didn't, everyone would write in an open standard and have free choice of what hardware to use?

Where the fuck can I watch the stream?

Pretty much.

They have incentive on controlling the market. Because they don't have the hardware to leverage the open standards.

On the flipside, AMD is too fucking broke to really get in there and compete on the software level.

Someone call the UN, because Nvidia just war-crimed AMD

Nvidia's YouTube channel, when they upload it there

Stop using Tesla's name for EVERYTHING, you morons. Let that great man rest in peace.

Why do you want to watch it? It's 90% muh deeeeeep learning