How do you gauge the performance impact of this? Manually setting tessellation to 8x or something?

You don't.
Tessellation is not the only fucking use of geometry.

Incidentally, what's with the undervolting performance?

Did AMD choose the wrong power settings as default?

No, this has been a thing for 5 years, gamers always get the worst silicon.

This is the most confusing GPU launch since forever.

It wouldn't be if AMD didn't launch 5 different lines using Zen cores this year.

Scarce resources.

Well they have a LOT to show, so why not?
And new shiny Radeon Pros are ready to fight Quadro in several markets, mainly VFX.

>AMD is still trying to figure out how to expose the feature to developers in a sensible way. Even more so than DX12, I get the impression that it's very guru-y. One of AMD's engineers compared it to doing inline assembly. You have to be able to outsmart the driver (and the driver needs to be taking a less than highly efficient path) to gain anything from manual control.
Kek.
Now that's what I call good engineering.

>Did AMD choose the wrong power settings as default?
Yes.
hardwareluxx.de/index.php/artikel/hardware/grafikkarten/44084-amd-radeon-rx-vega-56-und-vega-64-im-undervolting-test.html
I hope some company will be sensible enough to lower the default voltage.

Well this is just silly.
Shitty default voltage profiles and now an entire uarch feature disabled.

You mean utterly retarded engineering.

Why though.

just hurry up

fuck, I should've bought Vega 64 at MSRP when I had the chance.

This thing is gonna fucking get a price hike by retailers because it will delete the 1080 Ti from existence.

AMD is sandbagging, friend :)

retard here, does this mean the Wait for Drivers™ is not a meme? will we see the Vega crushing the 1080 or what exactly?

>retard here, does this mean the Wait for Drivers™ is not a meme?
Yes, literally AT's main editor tells you to Wait™ for Crimson Magick™ edition.
>will we see the Vega crushing the 1080 or what exactly?
Short answer: yes.

It will affect everything. You could arguably count how many triangles Vega pushes right now per clock, but we can't know how many it will push with a shim.
The whitepaper says 17 per clock, but it won't be the case because it's not direct use; it has to go through the vertex->primitive process, so who knows how much. Maybe 11, maybe barely anything.
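To put rough numbers on that (a back-of-envelope sketch; the 1.5 GHz clock and the candidate rates are illustrative assumptions, not measurements):

```cpp
#include <cstdio>

int main()
{
    // Assumed Vega-class core clock, purely for illustration.
    const double clock_ghz = 1.5;

    // Candidate primitives-per-clock rates: today's 4, a guessed
    // shim result, and the whitepaper's 17 peak.
    const double per_clock[] = {4.0, 11.0, 17.0};

    // Gigaprimitives per second = primitives per clock * clock in GHz.
    for (double p : per_clock)
        std::printf("%4.1f prims/clk -> %5.1f Gprims/s\n", p, p * clock_ghz);
    return 0;
}
```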

noice, thanks anon.

...

>Wait for Drivers™
This has been a thing for 6 years; Tahiti got a 20%+ boost from magic drivers over several months.

>wait for Vega
>wait for drivers
>wait for Navi
>wait for waiting
>wait

Or just buy Vega now, if you can get it for a good price (it's possible in some countries), and enjoy free performance upgrades over the years.
It's already at 1080 performance and will only get better.

It's literally 17 through the shim. They're having trouble exposing it directly to devs.

17 is through manual control as I understand it, if you optimize for the arch.
How automatic control will do is anyone's guess.

There's no "manual control" now. Ryan explains why it's borderline impossible.

Waiting for Vega drivers™

Just once, just fucking once, will AMD quit hiding their drivers!

You people expect the same launch drivers from AMD as you do from Nvidia, while ignoring that RTG is only 3,000 people with focus split on APUs, whereas Nvidia is 10,000 people dedicated to GPUs only.

Not happening.

Well, looks like their Pro drivers are working nice and dandy right now.

Does NGG work in the Pro drivers? Are there any confirmations?

SPECapc benches from the German THG show exceptional geometry perf in the likes of Creo.
>tomshardware.com/reviews/amd-radeon-vega-frontier-edition-16gb,5128-6.html
Decent for a fresh uarch.

>100% not enabled in any current public drivers
>ANY

But it's by and large the only one that will produce sub-pixel geometry, so long as the game's LOD is worth a damn.
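A quick sense of where "sub-pixel" kicks in: average pixels per triangle is just screen pixels divided by visible triangles. A sketch with made-up round triangle counts:

```cpp
#include <cstdio>

int main()
{
    const double px_1080p = 1920.0 * 1080.0; // ~2.07M pixels
    const double px_4k    = 3840.0 * 2160.0; // ~8.29M pixels

    // With sane LOD, visible triangle counts stay modest; heavy
    // tessellation can push the average below one pixel per triangle.
    const double tris[] = {1e6, 4e6, 16e6};
    for (double t : tris)
        std::printf("%5.0fK tris: %.2f px/tri @ 1080p, %.2f px/tri @ 4K\n",
                    t / 1e3, px_1080p / t, px_4k / t);
    return 0;
}
```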

I hope all hell breaks loose when it works; the spectacle will be fun to watch.

Not sandbagging, just their low budget and abysmal gaming sales from the 4000 through 200 series showing. Getting the Pro drivers working was important; as for the gaming drivers, well, shit, no one buys AMD GPUs even when they're better.

Currently an undervolted, overclocked 56 is within margin of error as good as a 1080, and the 64 is also margin-of-error a 1080. Right there is the bottleneck. How far ahead the 56 and 64 will go when a bottleneck that currently allows them 4, when they were designed for 17, is lifted is anyone's guess. But let's give some weight to AMD's words when talking to people who can and will sue if lied to: "AMD is more than competitive given price." AMD priced the cards at $400, $500 and $600 respectively, and currently the only one that is more than competitive is the $400 one.

I am not expecting miracles here, but I am expecting something.

If you couldn't figure it out yet, AMD wanted to fix this in a driver update and magically bring up performance while bringing down power usage. But some faggot had to figure it out too soon.

That was also a 100% new GPU arch. I am fairly sure Vega is just a completely reworked version of the same arch with a focus on bottlenecks. Ironically, it's currently being heavily bottlenecked by drivers.

5 lines of Zen...

You've got the 7, the 5 and the 3, then you have Threadripper and EPYC. You'll also have the laptop one and possibly the APU around Christmas.

Vega is the biggest change since GCN1, and don't forget that Hawaii (GCN2) got a pretty big boost in performance over its life.

Ryzen 3-7
Threadripper
EPYC
Mobile
Pro
Desktop APU (next year)
5 different lines this year sharing resources with GPUs

Atm every fucking Radeon everywhere in the world is sold out, so why wouldn't they make you wait? It's a shitty move for customers, but on the other hand: money.

Again fuck you miners.

No shit, but it's almost all bottleneck alleviation, something GCN had massive problems with. And if the alleviation works, they can massively scale for Navi, as in MCM without hitting a "not all shaders are used" problem.

Tahiti had no bottlenecks.
It was also the only one designed by the former ATi (Canada) team.

That's exactly the point of Vega.
Also Vega is exactly the opposite of Paxwell: it goes tall versus wide.

New Zealand confirmed best GPU

Neither had Hawaii.
It's Fiji that was a bottleneckfest.
4K ALUs @ 4 tris/clock was not the best idea, I guess.

I can send you vega 64 for 52k rubles, want it?

>Putinbucks
Do not.

The massive bottleneck for Tahiti was not present in its first go. The arch was meant to scale, but it largely did not scale as far as it was supposed to before full utilization stopped happening; see the 290 through Fury X and current Vega.

The GPU also had the worst bottleneck possible: shitty devs.

See, AMD uses a hardware scheduler, something a dev needs to program for and around to get the most out of, while Nvidia has a far more flexible but resource-intensive software one. Nvidia is able to wildly change how things run because it can do it in software, and if something breaks they can fix it. In AMD's case, when it works, it works SO much better than Nvidia's ever will, but when it doesn't, they take a hardware hit. Games have been able to multithread since the dawn of DX11; how many games are properly multithreaded? Nvidia's driver forces threading through software, while AMD can't.
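For reference, this is the DX11 multithreading in question: deferred contexts let worker threads record command lists that the immediate context later replays. A minimal sketch (assumes the device and immediate context already exist; all error handling omitted):

```cpp
#include <d3d11.h>

// Worker thread: record commands on a deferred context and hand back
// a command list for the render thread to execute.
ID3D11CommandList* RecordWork(ID3D11Device* device)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... record draws, state changes, etc. on 'deferred' here ...

    ID3D11CommandList* cmdList = nullptr;
    deferred->FinishCommandList(FALSE, &cmdList); // close the recording
    deferred->Release();
    return cmdList;
}

// Render thread: replay the recorded commands on the immediate context.
void Submit(ID3D11DeviceContext* immediate, ID3D11CommandList* cmdList)
{
    immediate->ExecuteCommandList(cmdList, FALSE);
    cmdList->Release();
}
```

Whether a game actually splits its recording across threads is up to the devs, which is the whole point of the post above.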

I would call shit developers a bottleneck that Nvidia dealt with on their own, and AMD is unable to.

Having only 4 front-ends and nothing else seriously limits the uarch's ability to scale.
Vega's idea of scaling with ALU count is nice.
Quite an elaborate way to cure an inherent bottleneck of GCN.

If they scale near perfectly, and this is assuming that 17 happens and it's not just a theoretical peak, then a 16,000-shader GPU would be possible. In bad games it would be bottlenecked a bit, and in well-programmed ones it would hit damn near a hardware-limit bottleneck.

The other bottleneck people ignore is CU utilization, which is far from stellar.

In short, AMD has issues keeping the cores fed.

IWD fixes that.

And when it's ready, Volta comes and it's Assfuck Maximizing Devices.

>bigger dies
Lmao.
It's not like...
...hmm...
...maybe AMD can launch a refresh with biggur dies too?

>volta comes

Q2 2018?
If anything that's closer to Navi

>AMD bigger dies
Everything points to them not being able to do that. Zen is all ~200mm² dies; Polaris was three small dies as well. Only Nvidia had the capacity to put out a full product stack of dies last year. I know it was the big 14nm refresh, but it was 6 separate GPU dies in a year, which is a hell of a lot, even if it's all the same design with bits chopped off.

>Zen is all ~200mm² dies
Because they don't need bigger ones.
>Polaris was three small dies as well
Because it doesn't scale.
Vega does.
There's totally going to be a big buff FP64 Vega.

This. Gaymers are the last priority when you have limited resources.

If the GPUs actually sold, they would be top priority, but considering gamers refused to reward AMD for being the best at a cheaper price point, they're just giving a token effort now, because the GPU market is too big to say fuck you to in its entirety.

I think it's more a problem of cost. The smaller Zen dies have just been revealed to be 40% cheaper, Polaris was targeting $200 MSRP, etc. Even Vega 10 is quite far from the reticle limit, and it's being used as AMD's primary HPC accelerator.

The rumours of an FP64 Vega 20 are all on 7nm, which again will shrink the die size. But GF's 7nm also has an increased ~700mm² reticle limit, and now that AMD has some money, who knows what they've got lined up outside of MCM.
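The cost argument in numbers: the usual back-of-envelope is dies per wafer from die area, discounted by a yield model. A sketch with assumed wafer cost and defect density (none of these figures are AMD's):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const double pi          = 3.141592653589793;
    const double wafer_cost  = 8000.0; // assumed $/wafer, illustrative
    const double wafer_d_mm  = 300.0;  // standard wafer diameter
    const double defects_mm2 = 0.001;  // assumed defect density per mm^2

    // Roughly Zeppelin-sized vs roughly Vega 10-sized dies, in mm^2.
    const double areas[] = {200.0, 480.0};
    for (double a : areas) {
        const double r = wafer_d_mm / 2.0;
        // Classic dies-per-wafer approximation: gross area term minus
        // the loss from partial dies along the wafer edge.
        const double dies  = pi * r * r / a - pi * wafer_d_mm / std::sqrt(2.0 * a);
        // Poisson yield model: larger dies catch more defects.
        const double yield = std::exp(-defects_mm2 * a);
        std::printf("%.0f mm^2: ~%.0f dies/wafer, %.0f%% yield, ~$%.0f/good die\n",
                    a, dies, yield * 100.0, wafer_cost / (dies * yield));
    }
    return 0;
}
```

With these made-up inputs, ~2.4x the area costs ~3.5x per good die, which is the whole case against big dies on a tight budget.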

The second HBM2 supplier won't be in production until Q4. A performance-related feature isn't ready at launch. Another mining boom. AMD staying mum on market pricing and supply issues, contributing to the price shock. That's planning incompetence, not sandbagging.

AMD uses Samsung HBM for V64 and Hynix for V56

Can you imagine AMD leaving the use of primitive shaders in the hands of the devs? Every single Nvidia game would have them disabled, kek.

No wonder AMD is trying to force everything to be done via the drivers, without any dev involvement.

It's not about HBM2 supply, heck, that's not the problem.
Wafer capacity is the problem.
You know it's silly, but AMD needs a LOT of wafers, like a LOT, they need enough Zeppelins to satisfy 4 out of 7+1.
GPUs are low priority for now.

The true Volta will have a hardware scheduler and proper FP16 support...
As long as they don't have that, calling them Volta is a bit much.

No, it's the other way.
>AMD is still trying to figure out how to expose the feature to developers in a sensible way. Even more so than DX12, I get the impression that it's very guru-y. One of AMD's engineers compared it to doing inline assembly. You have to be able to outsmart the driver (and the driver needs to be taking a less than highly efficient path) to gain anything from manual control.
It's too complex.

We already know that AMD won't let any dev do the job.
They're giving them an abstraction layer via the profiler, but that's it. The whole Project Cars fiasco was an eye-opener for them on what is to follow if Vega actually delivers that magical 17-per-clock number...
Not many devs took advantage of the culling mechanism on Polaris either, certainly not in Nvidia games. Witcher 3 at 8x MSAA was catastrophic for AMD; it was clear the devs didn't bother with AMD at all.

>too complex
You mean game developers are too retarded.

I'm assuming that when both automatic and manual primitive shaders get introduced, AMD will provide ways to get the primitive shader code it generates for your vertex+geometry shader. At that point, it's simply tweaking and profiling to see what works best.
Even if that weren't the case, we'd still see a list of "best practices" come along for writing efficient shader code.
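For intuition about what those generated primitive shaders buy you: the headline win is culling triangles early, before the rest of the geometry pipeline touches them. A plain-C++ stand-in for the kind of test involved (names are illustrative, not AMD's API):

```cpp
#include <array>

struct Vec2 { float x, y; };

// Signed-area test in screen space: clockwise winding (negative area)
// means backfacing under the usual convention, and a near-zero area
// means the triangle covers no pixels. Either way it can be thrown
// out before any further geometry work is spent on it.
bool ShouldCull(const std::array<Vec2, 3>& v)
{
    const float area = (v[1].x - v[0].x) * (v[2].y - v[0].y)
                     - (v[2].x - v[0].x) * (v[1].y - v[0].y);
    const float kEpsilon = 1e-6f; // "too small to hit a pixel" threshold
    return area < kEpsilon;       // backfacing or degenerate
}
```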

>You mean game developers are too retarded.
More or less so, yes.
>AMD will provide ways to get the primitive shader code that AMD generates for your vertex+geometry shader.
According to Rys they are not planning to do that now.
Maybe later.
And totally for the next console cycle.

>launch bad product on purpose so you can patch it and turn it into a mediocre product
Why would they want this? Day 1 reviews affect sales more than any driver update would.

>Day 1 reviews affect sales more than any driver update would.
>no stock
>sales
?

Doesn't matter. In 3 months' time, when there is stock and someone types "RX Vega review" into Google, the results they get will all be day 1 reviews.

...Unless AMD kindly asks everyone to retest Vega, that is.

they knew it was going to sell out for weeks to come anyway.
Better to take advantage of the mining boom while it lasts than to sit on a stockpile of cards waiting for the right drivers.

Everyone will anyway. It's like Ryzen didn't teach you anything.

You're welcome. Next I'm going to see if anyone can confirm or deny whether Mantor's interview with GN was about PDA or actually about culling functionality in prim shaders.

It was definitely, 100% totally primshader.
PDA is Polaris tech, it won't be discussed.

Yes, that's a reasonable assumption but I want to try and get it confirmed on the record by someone who spoke with Mantor.

Ask curlyhair.

What does that mean, summarised? Can we expect a better performance from the Vega cards, in the future?

FineWine®

>What does that mean, summarised?
GCN is bad at chewing triangles.
This feature (primshader) makes it good at chewing geometry.
It's currently disabled.
>Can we expect a better performance from the Vega cards, in the future?
Oh yes.
They can literally name the next Crimson a "magic edition".

Am I the only one who thinks "gigaprimitives per second" sounds fucking retarded?

I love your memes

Yes.
Learn what a geometric primitive is.

I just feel like you can't affix "giga" to whatever you want.

th.. thanks :3

>They can literally name the next Crimson a "magic edition".
Not FineWine Edition? They are losing the meme wars.

You can.
Get back to work, you lazy pajeet.
I need drivers for my future 56 Nitro+.
Whatever.
The age of magick drivers is back; it's almost the early 2000s again.

DELET THIS

>You can.

Yes, soon we'll measure EPYC performance in dead gigajews per second.

15-30% boost is possible with the meme drivers? Holy shit.

Yeah no. I got myself a 1080. Fuck AMD GPUs at this point.

Yeah.
You just need to read the whitepapers and look at the B3D suite results to guess it.
Hi Jensen.
Thank you for the magical Titan drivers.

Is that your sad attempt at portraying me as a shill?

No, it's my funny attempt at portraying you as a retarded shitposter.
Get the fuck out of this thread.

Funny how mad you brandcucks get. Do you also have a Vega dildo or buttplug?