FutureMark Async Compute Detailed

futuremark.com/pressreleases/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy

so the amd shills were wrong again?

what a surprise

It's simply using a neutral approach to async compute and not favoring a specific architecture. They would have to make another benchmark if they wanted to code for each architecture, which is what separates DX12 and Vulkan from the previous APIs.

what did you expect from a dying breed

they will still relentlessly shill and shitpost though

based 3d mark. even more detailed than I expected
most of them don't even know that async shaders =/= async compute. and they keep telling each other that nvidia can't into async compute because they don't use async shaders... really embarrassing to read

Since AMD shills consider this benchmark invalid, does that mean DOOM and every AMD-sponsored game (Hitman, et cetera) are to be considered invalid too?

DOOM, for example, uses Shader Intrinsic Functions, which are currently only available on AMD through AMD's own Vulkan extension, and many DX12 games don't have a rendering path optimized for Nvidia's hardware. This is really obvious when you compare DX11 to DX12 on Nvidia, with the DX12 renderer getting lower performance.

nope. it only works one way just like only white men can be racists hehe

also never forget
AMD= ISIS

Did anyone but the most short-sighted fanboys think anything else? The Steam thread was an absolute pain to read.

Yeah I remember that ramadan bullshit on their twitter.

AMD is a fucking sharia company and if you give them any money you support attacks on western culture.

word. glad they closed it, I felt bad for the devs and how polite they were to those 14-year-old imbeciles

All DX12 benchmarks that favour AMD are pretty much invalid now

Indians hate Muslims though.

Not the Muslim ones.

Those are Paki's dumb retard

Indians that post ramadan are muslim.

Still no AMD Shill, are they having a meeting now?

>Implying pajeets and pakees are ANY different
Hello rakesh

>55656782
>also never forget
>AMD= ISIS
>

What was that?

>AMD= ISIS
BASED AMD

It's legit by the way.

They took it down when people freaked out.

Throwing people in jail if they eat or drink during daylight in public is a celebration in sharia land.

It's going to be gimpworks all over again. Some games fully utilising AMD's async, others butchering AMD's performance by adding some DX12.1 features that Polaris doesn't support.

AMD subreddit is 70% cancer.

>equates adding DX12.1 features to gimpworks
>doesn't equate async to gimpworks
>amd shills are the worst because they love the microshaft

damn, you Nvidia shills came out in full force for this one.
Futuremark's press release actually confirmed what AMD shills were talking about.

>The implementation is the same regardless of the underlying hardware. In the benchmark at large, there are no vendor specific optimizations in order to ensure that all hardware performs the same amount of work. This makes benchmark results from all vendors comparable across multiple generations of hardware.

Emphasis added on the fact that the lowest-common-denominator DirectX 12 code path they deployed is the one that follows Nvidia's guidelines.

Why are you butthurt that a dev doesn't specifically optimize for a certain hardware config? Every time one optimizes for Nvidia you grab your pitchforks, but now it's suddenly bad? Shame.

make scorpio a dual gpu solution.
Literally every single problem that AMD is facing right now in regard to software (and especially the lack of improvement for their archs) will be solved for them by game devs, in record time, at a low cost (free man-hours), and with possibly novel solutions.

those fuckers are gonna bleed to get VR at 90+ fps on dual 480Xs.

>Why are you butthurt that a dev doesn't specifically optimize for certain hardware config?

Your post didn't make much sense, but the fact is that Futuremark did exactly that: they optimized for a certain hardware config.

Both AMD and Nvidia's Pascal can see a real performance increase from that chosen path, but while Pascal cards are being utilized 100%, software and hardware alike, AMD cards have been left with idle portions during a stress benchmark. And no, that's not AMD's fault, inb4.

If it works on both graphics architectures, how is it optimizing for 1 vendor? Do you read what you post?

If it only worked on nvidia or AMD, then it would be vendor specific optimization.

>DX 12.1 features that Polaris doesn't support

So hardware conservative rasterization and rasterizer ordered views? AMD have already said they could implement these in software if necessary. Nvidia's async compute is handled in software, so it shouldn't be a problem, right?

You'd hope so. But we all know what happens when you use the dreaded words AMD and software in the same sentence.

Optimization isn't functionality. It can run on both (like how HairWorks runs on AMD) and still have poor performance on the architecture the code isn't optimized for.

Yes, it does work for both, and both companies see performance gains.

But you are missing the fact this is Nvidia's implementation, from their guidelines: youtube.com/watch?v=Bh7ECiXfMWQ

So, it was tailored to fully utilize their hardware and software (even though, as I said, both vendors can benefit) while leaving one very important feature from their competitor untouched. In DX11 that would make sense, but it goes against what Nvidia, AMD and MS coined together as best practices for DX12: IHV-specific paths; that's the whole point of 'being close to the metal'.

Hairworks runs poorly on AMD because of hardware limitations. Games that use asynchronous compute run poorly on NVIDIA because of hardware limitations.

Does that feature of AMD work on NVIDIA and Intel hardware? If it does, then it's not a vendor specific optimization. If it doesn't, it is. It's black and white.

Also I can tell you with certainty that developers don't want to be all that close to the metal. There is a degree of abstraction they want, because it's painful as hell to code two completely different codepaths for AMD cards from year 2016 and AMD cards from year 2018, and then NVIDIA and Intel cards too. Abstraction exists so you can run the same code on hardware from all vendors. You can obviously optimize your code to run better on one vendor or the other, or between different generations, and I'm sure actual games are going to do this. Especially if they are sponsored by either AMD or NVIDIA.
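
To make that concrete: 'vendor-specific optimization' in practice usually starts with a branch on the adapter's PCI vendor ID. Rough sketch below; the enum and function are made up for illustration, but the DXGI calls and vendor IDs are real.

#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

enum class GpuVendor { Amd, Nvidia, Intel, Other };

// Pick a codepath based on who made the primary adapter.
GpuVendor DetectVendor()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return GpuVendor::Other;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))
        return GpuVendor::Other;

    DXGI_ADAPTER_DESC1 desc;
    adapter->GetDesc1(&desc);
    switch (desc.VendorId) {      // well-known PCI vendor IDs
        case 0x1002: return GpuVendor::Amd;
        case 0x10DE: return GpuVendor::Nvidia;
        case 0x8086: return GpuVendor::Intel;
        default:     return GpuVendor::Other;
    }
}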

and Futuremark's hypocrisy is evidenced in the following:

DX11
>Tessellation is a good way to measure how a GPU can perform under the most stressful scenario possible.
Fanbois should take that into account and realize the necessity of always pushing the technology forward. A good benchmark tool must always aim for the most dire circumstances, independent of each competitor's own strengths.

DX12
>There's a need to remain unbiased and not play favorites. The code path chosen for DirectX 12 deploys the lowest common denominator between vendors, aiming for the best compatibility across all currently available cards and ensuring that no one gets an unfair or perceived advantage; as is evidenced by the fact that all of the tested cards have shown noticeable performance gains.
Fanbois should take into account the difficulties of software development across varied hardware and its economic unfeasibility. A good benchmark tool must always aim for a good balance between the strengths of each competitor.

AMD, Nvidia and MS joined hands, sang kumbaya, shared a token, and replied to you a long time ago: stick to DX11 if you don't wanna do that.

>A good benchmark tool must always aim for the most dire circumstances
You do understand that they could cripple a card to a halt if they did that?

Yet despite that, their API isn't that close to the metal.

This shit is Athlon vs Pentium 4 all over again; GPU edition.

All companies do that mind you.

This article is a mistake. Next time, Futuremark, make it AMD-focused so they stop making the threads all over the net a salty, cancerous place.

the entire point of an API is to avoid things like that. if they truly wanted devs to implement separate renderers for each IHV we would just be writing code that compiles to the GPU's ISA directly.

in fact, one of the stated goals for vulkan was to be abstract enough that the same code written for a desktop gpu could run just fine on a mobile gpu (a completely different paradigm, since most mobile gpus are tilers)
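
you can see that in the API itself: the exact same enumeration code runs on a discrete desktop card and on a mobile tiler, and nothing forces you to branch on which one you got. Minimal sketch, assuming a VkInstance was already created:

#include <vulkan/vulkan.h>
#include <vector>

// Identical code for desktop and mobile GPUs; the paradigm difference
// (immediate-mode renderer vs tiler) is hidden behind the abstraction.
void ListDevices(VkInstance instance)
{
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        // props.deviceType: DISCRETE_GPU on desktop, typically
        // INTEGRATED_GPU on a mobile tiler - nothing above branched on it.
    }
}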

>while leaving one very important feature from their competitor untouched.

learn what the word 'asynchronous' means. nvidia, intel and amd could all decide one day to run commands in their compute queues one after another with no concurrency and still be asynchronous within the spec's definition.

you are clearly just crying because 3dmark did not give special treatment/advantages to AMD hardware.

>So hardware conservative rasterization and rasterized order view? AMD have already said they could implement these in software if necessary.

that would mean quite a few more GPU-CPU synchronizations per frame. it would destroy FPS on AMD hardware if they tried to implement conservative rasterization in software.
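
for reference, each of those GPU-CPU synchronizations is a full CPU stall on a fence, roughly like this D3D12 sketch (queue, fence and event assumed created elsewhere). A software fallback forcing several of these per frame is exactly the FPS problem:

#include <d3d12.h>
#include <windows.h>

// One GPU->CPU synchronization: the CPU blocks until the queue
// has actually reached the signaled fence value.
void WaitForGpu(ID3D12CommandQueue* queue, ID3D12Fence* fence,
                HANDLE fenceEvent, UINT64& fenceValue)
{
    const UINT64 v = ++fenceValue;
    queue->Signal(fence, v);                        // GPU sets fence to v when done
    if (fence->GetCompletedValue() < v) {
        fence->SetEventOnCompletion(v, fenceEvent); // wake the CPU at v
        WaitForSingleObject(fenceEvent, INFINITE);  // CPU stalls here
    }
}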

The entire point of DX11 is to provide a high-level abstraction layer.
The raison d'être of DX12 is to bring back a low-level abstraction layer.

Guess what? There's more work to do. But it's better overall, for everyone... Except code monkeys pulling overtime. They'll get richer more quickly, but they'll probably die sooner.

cute, so the tailored part that you left out of my edited quote doesn't matter at all?

right.

>The raison d'etre for Dx 12 is to bring back a low level abstraction layer.

no, dx12 and vulkan are both high-level abstractions of low-level constructs. the goal is not to be 'close to the metal' with architecture-specific codebases. those are still meant to be part of the driver's work.

>But it's better overall,

for whom? certainly not for businesses developing games and game engines. having to maintain separate renderers for each brand/arch would mean that you need 3x the time and money to build and maintain implementations for nvidia, amd and intel.

so you're saying that they 'tailored' to nvidia's hardware by refusing to give AMD special treatment?

are you retarded?

no Mr Shill, I'm saying that it was tailored because the implementation itself followed Nvidia's guidelines.

It works for both, and they decided that they shouldn't bother anymore. And yes, AMD cards would get even more performance gains if they had decided to ALSO implement AMD's guidelines alongside them.

>I'm saying that it was tailored because the implementation itself followed Nvidia's guidelines.

what guidelines? there's no control over how or when the commands you submit are executed - nvidia's and amd's implementations of 'async compute' exist at the driver level. 3dmark has no control over it.
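
you can verify that from the API. This is more or less everything an app can do for 'async compute' in D3D12 (sketch; device and recorded command lists assumed created elsewhere): create a second queue of type COMPUTE and submit to it. There is no knob that says 'overlap this with the graphics queue' - that decision belongs to the driver and hardware.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitAsyncCompute(ID3D12Device* device,
                        ID3D12CommandQueue* directQueue,
                        ID3D12GraphicsCommandList* gfxList,
                        ID3D12GraphicsCommandList* computeList)
{
    // A second queue, of type COMPUTE, next to the usual DIRECT queue.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // Submit graphics and compute work; whether the two queues actually
    // run concurrently is up to the driver/hardware, not the app.
    ID3D12CommandList* gfx[]  = { gfxList };
    ID3D12CommandList* comp[] = { computeList };
    directQueue->ExecuteCommandLists(1, gfx);
    computeQueue->ExecuteCommandLists(1, comp);
}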

>And yes, AMD cards would get even more performance gains if they had decided to ALSO implement these guidelines alongside.

so you're saying they should have unfairly tailored to AMD hardware rather than make a vendor-agnostic benchmark? why is that?

you're really good at tautologies.
people in the cubicles adjacent to yours must be really impressed by your capacity for circular references.

i think he is

>i have no response so i'm going to resort to meaningless personal insults

kek

you did it first.

but I was flattering you, not insulting. You're good at your job.

This is like if one company put out a 2.0ghz single core processor and another company put out a 2.0ghz four core processor but 3Dmark decides to only ever use a single thread.

Yeah, he made me reply to the same question worded in 4 different manners. I'm definitely not the brightest one in here.

fell for it hook, line and sinker.

people don't even bother with AMD anymore.
from Gigabyte's site.

Gigabyte's AMD cards even have SLI bridges. gigabyte.com/products/product-page.aspx?pid=5956#kf

...

>devs wanted a closer-to-the-metal approach
>they got it in dx12 and vulkan
>now they have to deal with all the available hardware and code specifically for it, unlike on consoles, which have fixed hardware
>tfw devs started dying because of more work
>publishers charge more for their games because of more work

Happiness

amd reddit is just as cancerous
>inb4 hurr reddit die in a fire already

I always like it when I take a neutral stance on a matter and stay anti-fanboy regardless of branding, yet get called an Nvidia and an AMD shill alike. Thanks.

Oh, isn't that quaint. Let's see now. It was OK when DX11 forced itself to be adhered to, compromising GCN performance. But now we're all 'neutral' with DX12. Nice sidestep there, FM.

Let's all play fair in the sandbox kiddies.

fuck off frogposter Palit nvidia scum

kek whoever designed that page is a fucking dingus.

In the end it's all about raw fps. A benchmark should stress a card to its maximum performance and measure the resulting fps. It's then down to the GPU (or the person tweaking it) to decide whether it throttles or whatever. I understand software like FurMark is not really a good idea to leave running too long, but surely for a 5-minute benchmark run it should be fine to put the pedal to the metal, so to speak.

If a GPU supports a function that gives it a competitive edge in that area, it should be allowed to fucking use it to its fullest. It's like kicking Usain Bolt in the nuts before a race because it would not be fair on the other runners.

And then claim that it's multi-threaded because the single core can switch threads.

not nvidia

you're retarded. I'm glad you're an AMD customer

So now that the dust has settled and we know that amd's implementation of async only gives it a 5% or so performance advantage over nvidia's without time-consuming and expensive hardware-specific optimizations, what will the amd users start hyping next as the feature that they believe will finally make their cards not suck?

it's not dx12 without hardware async

>we know that
stopped reading right there

Actually, it's more along the lines of this
>This is like if one company put out an 8.0GHz dual-core processor and another company put out a 2.0GHz quad-core processor but 3Dmark let the firmware of the processor decide how it handled the load.

>87fps on 1060 in vulkan
>114fps on 480
>5%
Get out. You're too young to post in Sup Forums.

I see it in analogy form:

NVIDIA 3.1GHz quad core and AMD 3.0GHz quad core with hyperthreading and futuremark decides to not allow full use of hyperthreading

do we have any date for this card? when will we get the reviews?

Did you read the fucking article? AMD (among others) read the source and GREENLIT THE FINAL PRODUCT.

>3DMark Time Spy has been in development for nearly two years, and BDP members have been involved from the start. BDP members receive regular builds throughout development and conduct their own review and testing at each stage. They have access to the source code and can suggest improvements and changes to ensure that the implementation is correct. All development takes place in a single source tree, which means anything suggested by a vendor can be immediately reviewed and commented on by the other vendors. Ultimately, each member approves the final benchmark for release to the press and public.

>If a GPU supports a function that gives it a competitive edge in that are it should be allowed to fucking use it to it's fullest.

Okay, let's make our 4chanmark and spend time coding separate paths for each vendor. How do we decide what extent is fair? Do we spend 6 months dev time on Nvidia, then 6 months dev time on AMD? What about Intel iGPUs?

If that feature is supported and advertised as a major selling point of multiple new industry standard APIs then I do believe that it should be supported.

What feature?

any of them

But how do you decide which features you spend time doing vendor-specific optimizations for? And how much? What if after doing those optimizations Nvidia has gained 50% and AMD has gained 15%? Will the world drown in salt?

>vendor-specific optimizations
>ever

But that's what the amdmen are shouting off Mount Stupid. It's not TRUE dx12!! You need to optimize for each vendor! Btw what's a benchmark?

what are you talking about? they want full representation of DX12, not vendor-specific tweaks. If DX12 advertised it, it should use it.

They certainly do want that. Read the steam thread or /r/AMD.

Also, in DX we have these things called feature levels. AMD doesn't even support FL12_1.

>greenlit
They had a vote. Amongst 5 members, all they had was a vote they were sure to lose.

And as anecdotal evidence: Futuremark is a pretty tiny company, and one of their devs was displaying willy-nilly Nvidia fanboyism on the OCN forums.

They probably did it for free.

A little more evidence like this and AMD might be able to actually bring a lawsuit against futuremark/nvidia

Why are you listening to retards? Also, Yes full parallel compute, async shaders, and everything in 12_1 should be present and as optimized as possible. Benchmarks are a tech showcase after all.

>They had a vote. Amongst 5 members, all they had was a sure lost vote.
No, it's not a majority vote. It's a "need 100% approval". That's what "each member approves the final benchmark" means.

en.wikipedia.org/wiki/Feature_levels_in_Direct3D#Direct3D_12

See "GPUs supporting as a maximum feature level". If they had went with FL12_1, the benchmark could only currently run on Intel and Nvidia cards.

since you brought up that place that shall not be named:
>It is said that all gymnasts are instructed to receive the score from the judges with a smile, especially when they know there is a bias against them. Athletes are told this in order to prevent greater damage, as displaying their disapproval would only further amplify the bias against them. I am just speculating, but maybe AMD is confronting a similar situation here, so they opted to say nothing and give Futuremark enough rope to hang themselves. In other words: by not protesting, AMD avoids further bias against itself and allows Futuremark to discredit themselves in favor of AotS/DOOM.

I've been quite curious why there have been no official statements yet. I'm frustrated at tech sites that won't touch this subject (it's at the very least newsworthy), but quite baffled as to why AMD especially never addressed the issue. Not my post, but it's the best take I saw on it.

How retarded are you?

Any benchmark these days should support every single feature. Reviews should consist of everything enabled so the consumer can judge what's best for them based on what they play. If I mostly play Paradox games, which card is best in that situation, including some lame fluff feature like HairWorks or whatever AMD has, should be made plain and benchmarked.

AMD probably won't say "yeah the benchmark is fine" in a public statement because they think the rabid fanboyism is profitable for them, whether it's based on facts or not.

or they themselves don't see any issues with it, as you said.

But it would still be nice to have at least that basic statement, 'guise, stop raising a non-issue', if that were the case.