Vega literally kills triangles

youtu.be/XC5Dy_b-kE8

Based Gamer's Nexus hosts AMD for triangle Holocaust

I use Nvidia :3

Oi vey those triangles were innocent.

Can anyone present a demonstration where 128x tessellation shows any difference from 64x?
In everything I've seen, even the difference going from 32x to 64x was extremely minimal.

Going from 64x to 128x roughly quadruples the triangle count though; I don't think it'd be a subtle difference judging by those numbers.
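Rough numbers, nothing to do with the exact D3D/GL tessellator pattern: assume a quad patch with a uniform integer factor N, which splits into roughly an N x N grid of quads, i.e. about 2*N*N triangles. Doubling the factor quadruples the triangle count per patch, whether or not any of that is still visible on screen.

/* toy estimate of triangles per quad patch at a few tess factors;
   the real tessellator partitioning differs at patch edges */
#include <stdio.h>

int main(void) {
    long factors[] = {16, 32, 64, 128};
    for (int i = 0; i < 4; ++i) {
        long n = factors[i];
        printf("tess %3ldx -> ~%ld triangles per quad patch\n", n, 2 * n * n);
    }
    return 0;
}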

Gaymershill, can you guys fuck off

wouldn't this fuck up some screen space stuff?

that'd actually be proper useful for rendering fuck huge shit
I think some games already do this though

muh 6 gorillian triangles

>gpu literally decides not to render stuff and there is no option to turn this off
That's fucking retarded, why the fuck would I want a gpu that decides it just won't play my game, how the fuck did they even think of that.
THE WHOLE POINT OF A GRAPHICS CARD IS TO RENDER VIDEOGAMES WHY THE FUCK DID THEY MAKE IT SO IT DIDN'T RENDER WHAT THE SHIT

Yep, it's going to throw away all the polys you can see instead of the ones you can't. Fucking brilliant.

>WAAAHH WHY DOES MY GPU REFUSE TO RENDER TRIANGLES THAT ARE NOT VISIBLE?!!!

z culling has existed for ages, this is a better way of doing it
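For anyone who doesn't know what z-culling means in practice, here's a toy software version with made-up depth values: keep one depth value per pixel and reject any fragment that lands behind what's already stored. Hardware early-Z does this before shading; what the video describes is Vega trying to throw work away even earlier than that, at the triangle level.

/* toy per-pixel depth rejection: fragments behind the stored depth
   never get "shaded" */
#include <stdio.h>

#define W 4
#define H 1

static float zbuf[W * H];

static void draw_fragment(int x, int y, float z) {
    int i = y * W + x;
    if (z >= zbuf[i]) {                 /* farther than what's stored? */
        printf("frag (%d,%d) z=%.2f culled\n", x, y, z);
        return;                         /* rejected: never shaded */
    }
    zbuf[i] = z;                        /* closer: keep it */
    printf("frag (%d,%d) z=%.2f shaded\n", x, y, z);
}

int main(void) {
    for (int i = 0; i < W * H; ++i) zbuf[i] = 1.0f;  /* start at far plane */
    draw_fragment(1, 0, 0.5f);   /* near wall: shaded */
    draw_fragment(1, 0, 0.9f);   /* hidden triangle behind it: culled */
    return 0;
}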

Innovating while barely surviving; I wonder what AMD would do with massive resources like Intel has.

I don't fucking get it.
Vega has so many great features over GCN4 that should make it perform so much better in games than it currently does, but instead it's worse than Fiji clock for clock while drawing more power.
On the other hand, it demolishes everything else in workstation performance.

So why the fuck does it have such piss-poor gaming performance?

We'll see if this is working or not when RX Vega launches.

drivers maybe

>rendering

You're a fucking idiot

because it's a workstation card

Because FE doesn't have this feature enabled, nor several others. They're being added in the driver update for RX Vega's launch.

Retard

It's a workstation card performer because that's what it is

> WHY THE FUCK DID THEY MAKE IT SO IT DIDN'T RENDER WHAT THE SHIT
'Cuz devs are retards

>Muh warkstorton card xD
And yet Quadro cards get the exact same gaming performance as Geforce GPUs at the same clock speed.

This seems kind of implausible; we've been seeing results like FE's, both leaked and official, for at least half a year now. Hell, I think it started in 2016.

Have AMD really never tested these features in a single game? It seems so implausible. What the fuck are they doing? Why are they even showing multiple official demos of Vega running games if they don't have essential features working? And they're going to get all these features working in a single month, when we haven't seen one controlled demo of them working this late in the game?

>I don't know how driver development happens
Honestly your post is barely worth responding to when it's clear you're so grossly stupid. All of the tech press knows that Vega cards in the hands of the public right now are using beta drivers without certain features enabled. This isn't a mystery.

Rasterization-based rendering has no productive use. Gaming is a time and money sink and subtracts value from the global economy as a whole. None of that performance matters in the grand scale of things.

>muh beta drivers
If you use Vega you're a beta alright

This wasn't even half as clever as you thought it was.

What is Steve's (the GN editor's) background? Engineering?

Wouldn't this screw shit up for reflections or no?

>i wonder what AMD would do with massive resources like Intel has.

Suffer from hubris for being top dog and do shady shit to the consumer while cranking out inferior products. Like Intel is doing now.

It's just the nature of monopoly.

if they're using really shitty reflection calculations, maybe.

Did none of you fuckwits even notice it says "Under embargo until June 29, 2016 at 9 a.m EST."

>2016
TWENTY FUCKING SIXTEEN.

Yeah, it's interesting. I saw the interview; it seems like they found a way to cull literally everything if they so wish, without a performance hit.

If it's an Nvidia game they probably have an underground ocean of triangles like Crysis 2, LOL

>MSAA
>Rendering
>Zero area triangles

Maybe if you've got shortbus high school students that are making these scenes badly.

Neither does watching sports and so many other things

Better test Vega with AA turned off.

this is nvidia's way of fucking up amd cards

Where did this image come from? I didn't see this one in the launch slide deck on TPU; the only one with graphs and culling was that one with "Vega fastpath" that was like 10% faster than "native pipeline".

This is from Polaris; the same engine is on Vega, but a lot bigger.

150W Nano when?

I feel like this is a thing game engines should be doing, not having to rely on the gpu
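To be fair, engines already do a coarse pass of this on the CPU. A minimal sketch of it (the plane/sphere setup is made up for illustration): test each object's bounding sphere against the frustum planes and skip the draw call entirely if it's outside. Whatever per-triangle culling Vega does in hardware happens after that, on what the engine still submits.

/* CPU-side bounding-sphere vs. frustum test; fully-outside objects
   never even get a draw call */
#include <stdio.h>

typedef struct { float x, y, z, w; } Plane;   /* plane: ax + by + cz + d = 0 */
typedef struct { float x, y, z, r; } Sphere;  /* object bounding sphere */

static int sphere_in_frustum(const Plane planes[6], Sphere s) {
    for (int i = 0; i < 6; ++i) {
        float dist = planes[i].x * s.x + planes[i].y * s.y +
                     planes[i].z * s.z + planes[i].w;
        if (dist < -s.r)
            return 0; /* completely behind one plane: cull it */
    }
    return 1; /* at least partially inside: submit the draw */
}

int main(void) {
    /* a toy frustum: near plane z > 0, far plane z < 100, loose sides */
    Plane planes[6] = {
        {0, 0,  1,   0}, {0, 0, -1, 100},
        {1, 0,  0, 100}, {-1, 0, 0, 100},
        {0, 1,  0, 100}, {0, -1, 0, 100},
    };
    Sphere visible = {0, 0, 50, 5}, behind_camera = {0, 0, -20, 5};
    printf("visible object: %s\n", sphere_in_frustum(planes, visible) ? "draw" : "cull");
    printf("object behind camera: %s\n", sphere_in_frustum(planes, behind_camera) ? "draw" : "cull");
    return 0;
}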

That's nothing compared to the concrete barriers.

>prey animals, have their two eyes positioned on opposite sides of their heads to give the widest possible field of view, panics when vision is constricted to only the front

You would let a dev that releases a game with more bugs than hours of gameplay actually tell the GPU what to cull and what not?
Especially when you have to deal with shitworks?

Are most of the new features driver-dependent, or do the game engines need optimization for them? Does this triangle culling happen automagically, without developers targeting a specific API?

In the current climate of extremely lazy Devs, no

>admitting they're still behind pascal

It's funny, because these features would indicate RX Vega should surpass the 1080 Ti once they're 'enabled', but even AMD are advertising it as a 1080 competitor. Clearly the features don't do much at all.

It just means Vega is clocked to hell at stock and does fine underclocked, meaning Raven Ridge will be stellar since its clocks are gonna be low.

It means vega is just like the Fury series, the Nano is the highest binned consumer card while the 56/64 are just overvolted to shit because they're bad chips.

devblogs.nvidia.com/parallelforall/inside-volta/

>The new Volta SM is 50% more energy efficient than the previous generation Pascal design, enabling major boosts in FP32 and FP64 performance in the same power envelope.

THANK YOU BASED NVIDIA

How will AMD keep up? They just barely caught up with Kepler.

I hope it's priced the same as a Vega 64. I was tempted by an R9 Nano, but it only had 4GB of VRAM back then.

Volta SM is 50% more energy efficient

that's what i like to hear.
AMD needs to up its efficiency also.

So this is the way it's meant to be paid?

lol

>Data Center GPU
hwhat?

>>The new Volta SM is 50% more energy efficient than the previous generation Pascal design, enabling major boosts in FP32 and FP64 performance in the same power envelope.

implying that Nvidia will enable full CUDA support on gaming cards

keep dreaming

it's literally not, but ok

You see, Boris, Nvidia failed to mention that this 50% comes without the tensor cores' AI ISA being used at all, i.e. with that part of the SM literally sitting there doing nothing.
It's like AMD saying they managed to achieve 50% more efficiency on the HWS.

>zero area triangles
That is THE way to join meshes that share the same index buffer: degenerate triangles between strips. The OpenGL ES 2.0 Programming Guide specifically recommends it.
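For anyone who hasn't seen the trick, a minimal sketch (index values are made up): join two triangle strips into one index buffer by repeating the last index of strip A and the first index of strip B. The extra triangles between them have zero area, so the rasterizer drops them and both meshes go out in a single draw call.

/* walk a joined triangle strip and mark which triangles are degenerate
   (zero-area) because of a repeated index */
#include <stdio.h>

int main(void) {
    /* strip A covers vertices 0..3, strip B covers vertices 4..7 */
    unsigned short joined[] = {0, 1, 2, 3,   /* strip A          */
                               3, 4,         /* repeated indices */
                               4, 5, 6, 7};  /* strip B          */
    size_t n = sizeof(joined) / sizeof(joined[0]);

    /* every consecutive triple of indices is one triangle of the strip */
    for (size_t i = 0; i + 2 < n; ++i) {
        int a = joined[i], b = joined[i + 1], c = joined[i + 2];
        int degenerate = (a == b || b == c || a == c);
        printf("tri %zu: %d %d %d%s\n", i, a, b, c,
               degenerate ? "  (zero area, culled)" : "");
    }
    return 0;
}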

>this kills the amd

But, how many buttcoins can it manufacture per shekelsecond? Let's keep perspective on our priorities here. Miners are people, too, you know.

good luck pulling shit like this on vega tho

GV100 is also clocked lower, this is probably where all the efficiency comes from.

The real shit will come once Nvidia puts a hardware scheduler back in their cards and their TDP skyrockets.

this is just bait right? nobody is this stupid right?

Imagine you are in the hospital and they are using AMD to do your medical imaging and the GPU just decides not to render your tumor.

yeah they just do this to gimp AMD cards lel

Crysis 2

But that's wrong

Volta already has built-in independent thread scheduling and yet it's 50% more power efficient than Pascal

>Volta GV100 is the first GPU to support independent thread scheduling, which enables finer-grain synchronization and cooperation between parallel threads in a program.

hpcwire.com/2017/05/10/nvidias-mammoth-volta-gpu-aims-high-ai-hpc/

>“It’s actually pretty remarkable to me that we were able to get more flexibility and better performance-per-watt. Because I was really concerned when I heard that they were going to change the Volta thread scheduler that it was going to give up performance-per-watt, because the reason that the old one wasn’t as flexible is you get a lot of energy efficiency by ganging up threads together and having the capability to let the threads be more independent then makes me worried that performance-per-watt is going to be worse, but actually it got better, so that’s pretty exciting.”

AYYMD is getting destroyed in power efficiency

Vega literally kills Radeon market share

Vega will probably be the best performing card on the first level of Crysis 2 by far now.

Like all those PowerPoint slides: "magic technology no one even thought about, we did it", then a 2% increase in games in the best case.

Because it APPEARS to have 20% less raw memory bandwidth than Fiji, whether it be due to drivers being fucked, sandbagging, or a hardware flaw (unlikely, they had time to catch it and respin already).

You tell me how much of this 20% regression in memory bandwidth will get fixed and I'll tell you how much Vega's performance will improve in games and ETH mining.

Why.
What the fuck.

Yeah, a 25% performance regression in texture fill-rate clock-for-clock vs. Fiji is TOTALLY logical, right?

fiji has double the hbm stacks idiot

>He doesn't know that AMD's primitive discard acceleration debuted on Polaris and wasn't used at all on Fiji.

An HBM2 stack has double the bandwidth per pin though.
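Back-of-the-envelope, using the commonly quoted clocks (HBM1 at 1.0 Gbps/pin on Fiji, HBM2 at 1.89 Gbps/pin on Vega 64, 1024 bits per stack): the higher per-pin rate nearly cancels the halved stack count, so peak only drops from ~512 GB/s to ~484 GB/s. The regression people keep measuring is in effective bandwidth, not peak.

/* peak memory bandwidth from stack count and per-pin rate */
#include <stdio.h>

static double peak_gbs(int stacks, double gbps_per_pin) {
    const int pins_per_stack = 1024;           /* HBM1 and HBM2 alike */
    return stacks * pins_per_stack * gbps_per_pin / 8.0; /* bits -> bytes */
}

int main(void) {
    printf("Fiji    (4 stacks HBM1): %.0f GB/s\n", peak_gbs(4, 1.0));   /* ~512 */
    printf("Vega 64 (2 stacks HBM2): %.0f GB/s\n", peak_gbs(2, 1.89));  /* ~484 */
    return 0;
}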

>It's funny because the features would indicate RX Vega should surpass the 1080 Ti with these features 'enabled' but even AMD are advertising it as a 1080 competitor, clearly the features don't do much at all
Gee, I wonder if a seeming ~30% regression in effective texture bandwidth vs. Fiji might explain that?

Wait somebody with half a brain did B3D suite benching?

wrong post, m8

>Wait somebody with half a brain did B3D suite benching?
I found the benchmarks doing my own digging into this; they're from a German site.

This has me pulling my hair out because it seems logically impossible that you would design a higher clocked version of Fiji to have LESS memory bandwidth, and yet there appears to be a massive memory bandwidth regression. You would also expect a large bandwidth savings from primitive discard and DSBR, yet RX Vega 64 appears barely faster than Vega FE.

So either the drivers are/were totally fucked or RTG is sandbagging like crazy, or it's some of column A and some of column B.

>So either the drivers are/were totally fucked or RTG is sandbagging like crazy
It's both.
Pro drivers are ready though, they actively demoed WX9100 and SSG in their SIGGRAPH booth.

I dunno about vega but drivers for AMD over the last like 2 years have been targeting specific software (games)

Vega kills everything, including your power bill.

Based Raja.

an entire FUCKING SEA under each level

>magic technology no one even thought about, we did it

that's not what this video is about

not rendering non-visible scenes is an old idea

Either Vega's memory controller has the same issue that affected Fiji's effective vs. peak bandwidth, or the current drivers were so cobbled together that things are only in a barely working state.

As I said, WX9100 and SSG are working nice and dandy.

considering the hardware side of things is held together with paste and popsicle sticks I wouldn't be surprised if the software side was similar

Normal shaders can't just write to random (sub-)pixels. UAV shaders can, but that's not really relevant here.

The devs aren't lazy, they're cheap. NVIDIA's middleware is a good deal: sure, it's designed to fuck AMD's architecture and gives the game developer almost no way to remedy that ... oh well, it's free. Win some, lose some.

Yes, this is how CryEngine has worked ever since the first Far Cry

They're talking about the "muh big die" version for AI research that costs almost as much as a small house. How much of that ends up in the consumer product out next year remains to be seen.

vega will compete with the gtx 1160

That's horseshit unless you're retarded and expect a 60%+ performance gain from GV102 without a die shrink