AMD DX12 is BEST DX12

>Mankind Divided released DX12 patch showing better AMD performance
>MUH AMD REIGNS SUPREME TAKE THAT NVIDIOTS!

>frametimes on AMD are a clusterfuck
>I-IT'S A BETA PATCH S-SO IT'S NOT VALID


DELETE THIS

Oh. Okay sweet dude.

When are frametimes not an issue with amd cards? They seem to suffer from this shit every other driver release.

I still want a 480 but I'm not deluded enough to think the drivers are without issue.

The stutter that was once exclusive to CrossFire is now on single GPUs as well! Good job AMD!

Apparently the amdpajeet squad is on break or fired.

What the hell is a frame time?

The time it takes to put a picture of your graphics card in a frame to put on your wall.

It's a function of microwave time.

>Go check DX12 patch for the game
>Excited for the CPU performance boost
>DX12 has lower FPS in every scenario
what is this?

Frame render time.
Why they call it Frame time is beyond me.

Alright, so why is this a problem when both nvidia and amd push frames faster than your monitor refreshes? Say you're on a 60Hz monitor: you need to render and push a frame every 16.67ms. Both cards stutter, some more than others, but does the time difference actually materialize IRL, or does it just get discarded when a new frame replaces it? Besides that, how is this the case when they produce over 60fps?

I'm wearing two pairs of pants.

FPS is an averaged-out measurement of performance, which hides lag spikes like the ones in the OP picture.

Frame time is how long each individual frame takes to render. Because it isn't averaged out, it gives a much better indication of what you actually see on screen.
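
Quick back-of-the-envelope sketch (made-up frame times, nothing to do with the actual benchmark) of how an average can hide a spike:

# Toy numbers: nine fast frames and one 70ms hitch.
frame_times_ms = [14, 15, 14, 70, 15, 14, 16, 15, 14, 15]

avg_frame_time = sum(frame_times_ms) / len(frame_times_ms)  # ~20.2 ms
avg_fps = 1000 / avg_frame_time                             # ~50 FPS, looks fine on a graph
worst_frame = max(frame_times_ms)                           # 70 ms = a very visible hitch

print(f"average FPS: {avg_fps:.0f}, worst frame: {worst_frame} ms")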

...

>what is this?
DX12 is lower level than DX11.

With DX11, your GPU vendor's driver optimization team is in charge of writing and optimizing the shaders for the game.

With DX12, the game developer is.

Who do you think does a better job at it?

The game developers know their code better than anybody.
GPU vendors know their hardware better than anybody.

But given that DX12 gives very intimate access to the hardware, I'd say it's best that the game developers themselves optimize them.

>But given that DX12 gives very intimate access to the hardware, I'd say it's best that the game developers themselves optimize them.
Yes, and they suck at it. There are a few issues:

1. Game developers are not exactly the world's most shining example of competent programmers, and they don't understand the hardware as well as the driver team does

2. Even if the game developers end up optimizing, they will probably only optimize for a single platform (rather than having dedicated code for different vendors and GPU families), which means that at best you're going to get good performance only if you use the same GPU that the game developer used in his development system.

I fully expect performance with DX12/Vulkan etc. to be very hit and miss. Some games will perform better, some games will perform worse.

If anything, AMD is the only company to really benefit from DX12/Vulkan, because their driver team is shit.

1/framerate

why aren't they using vulkan? i don't want to install winblows 10

Enjoy your Microsoft anti-competitive market tactics.

There are plenty of DRM-free Linux games available. You can survive without Mankind Divided. (whatever that is)

>Say if you're on a 60hz monitor, you need to render and push a frame every 16.67ms
If one frame takes ~60ms to render while you're averaging 60fps, you're going to miss three or four of those 16.67ms refresh windows: the monitor keeps refreshing whether or not your card has pushed a new frame into the frame buffer, so the previous frame just gets shown again.

How is this a problem when the game is running at >60fps?

At 60fps it means it's rendering and pushing frames every 16.6ms.

It's NOT possible to average 60fps (measured over each second, not over the whole game/benchmark) when some frames take ~70ms to render (4-5 refresh intervals at 60Hz) while no other frame comes in significantly under 16ms to compensate (even the 1070 in this chart sits around 15-20ms).

No, it's not 1/X, it's 1/[something that isn't X]

In my experience these spikes are caused by a slow hard disk.

Framerate, or FPS, is the number of frames rendered in that second. This part is clear, I think.
Frame time is the time it takes to render a specific frame. Let's assume 60FPS. In theory, if every frame took the same time, the frame time for each frame would be 1/60th of a second. The problem with stutter is when one frame takes way, way longer to render than the others. It barely affects FPS, since it's just one frame, but it does affect fluidity. FPS doesn't perfectly measure fluidity: if you rendered 300 frames in that second and one of them took 0.3s, you're going to notice stutter despite having fuckhuge FPS.

>At 60fps it means it's rendering and pushing frames every 16.6ms.
That's just it, it doesn't. On average that's the time, but it could be way lower for most frames and way higher for one of them.

Also don't forget that part of nvidia's grand strategy is to train people on nvidia hardware and embed them in game development, basically acting as nvidia evangelists within those houses.

if you collected resumes from all the large game studios and scanned for what they have in common, a high percentage would have an nvidia background, enough to be significant.

it's not illegal, but it's dodgy as fuck.

Exactly, but the graph doesn't show anything that could make up for that: at the lowest it's around 10ms, which can't compensate for all those 70ms frames.

The amd stutterfest ride continues

>jumps everywhere over 50ms

Holy shit, what a fucking mess.

>preview alpha
>REEEEEEEEEEEE

I am done with AMD, there is nothing they can do right. They promised "good $99/$149/$199" GPUs but they delivered bad $149/$199/$249 GPUs.

DELET

Yeah, and I fully expect the DX12 investment to bite AMD in the ass when game devs start optimizing their games exclusively for nvidia again. Pretty soon AMD will lose the performance boost they're getting from the even playing field and the status quo will be back to normal, except arguably even better for nvidia.

WOAH THERE, Don't you know you might suffer from heat stroke wearing a hat like that while running an AMD gpu indoors?

> AMD hardware generally pushing out frames faster
> occasionally taking a multi-frame lunch break due to pajeet-tier resource management in drivers

some things never change, eh?

Not him, but I've been saying this for ages. Since DX12 requires a lot of hardware-level optimization to work efficiently, nvidia doesn't even need to use gameworks middleware anymore. They can literally tell devs to fuck off with the AMD optimizations and focus on nvidia hardware. It makes sense: nvidia has the majority of the market share, so devs have more incentive to optimize for nvidia GPUs than for the minority who use AMD GPUs.

Good times are coming

GPUOpen is a good thing in principle to replace most/all of JewWorks middleware, but we'll see how competently the initiative is managed.

Isn't the point of dx12 to provide a ubiquitous api so that the vendors implement dx12 themselves and game devs only write one version of the code for both brands?

Nvidia in DX11 outperforms AMD in DX12, and if all DX12 brings to the table is performance, then there is no reason to pick it if 11 performs better.

I've finally gone back to nvidia after more than a decade (had a 128MB 6800, the one where you could unlock the shaders) to a GTX 1070. AMD chips have become so unreliable, at least for me; constant RMAs.

Game devs can barely write proper multithreading code or make a fucking console port that doesn't lag horribly and you expect them to optimize for DX12 properly? Nice joke...

No, that was pretty much the point of DX9 and OpenGL though. DX12/Vulkan is the exact opposite.

The fact that the vendors implement DX9/OGL themselves is exactly why nvidia was always able to compete with AMD despite using significantly weaker hardware: their driver optimizations[1] have always been significantly better.

DX12/Vulkan will even out that driver advantage for a while, letting AMD hardware catch up with Nvidia (or perhaps even overtake them). AMD is gambling that this temporary lapse in nvidia's competitive advantage will be enough to tip the market share in their favor. It's an extremely risky endeavor, and I expect them to fail. Sell all your AMD shares.

[1] Plus just straight up injecting alternate versions of shaders from popular video games to make them better in benchmarks.

NVIDIA

FASTEST IN DX12 AND REAL DX12 SUPPORT WITH FEATURE LEVEL 12_1

traditional APIs provide a high-level abstraction to the game engine dev and manage the real GPU resources (primarily memory buffers and shaders) in the background.

the point of DX12/Vulkan is to let/force the game devs to manually manage all the resources so that the driver isn't wasting CPU time trying to guess at what the game really wants for optimizations.

old API drivers generally become a dumping ground for optimizations and bug fixes added by the GPU vendor to cover for shitty and often bizarre programming choices, and the new style in theory forces these changes upstream since the driver is no longer some ornate heuristic layer.
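
To make that concrete, here's a toy sketch of the difference. These are hypothetical classes, not the real D3D11/D3D12 or Vulkan APIs; it only illustrates "driver guesses per call" vs "app pre-builds its own work":

# Toy model, hypothetical names only; real APIs look nothing like this.

class ImplicitDriver:
    """DX11-style: the driver tracks residency and re-validates state on every
    draw call, guessing what the app wants. The guessing costs CPU time per call."""
    def __init__(self):
        self.resident = set()

    def draw(self, texture):
        if texture not in self.resident:     # heuristic: upload on first use
            self.resident.add(texture)
        self.validate_pipeline_state()       # re-checked on every single draw
        # ... submit to GPU ...

    def validate_pipeline_state(self):
        pass                                 # stand-in for the driver's bookkeeping


class ExplicitApp:
    """DX12/Vulkan-style: the application allocates memory, uploads resources and
    records command lists itself; submission just replays pre-built work."""
    def __init__(self):
        self.command_list = []

    def record_once(self, textures):
        for t in textures:                   # the app decides when/where things live
            self.command_list.append(("bind", t))
        self.command_list.append(("draw",))

    def submit(self):
        return list(self.command_list)       # cheap replay, no per-call guessing


app = ExplicitApp()
app.record_once(["rock_albedo", "rock_normal"])
app.submit()   # same prebuilt commands every frame, no driver heuristics in the loop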

The implementation is honestly fucked though. DX12 on NVIDIA performs worse than DX11. The hardware is literally the same, obviously, so if there's a drop in performance it's 100% on the software side, i.e. the software (DXMD on DX12 + NVIDIA's DX12 driver) is worse than it is on DX11.

>inb4 muh no async
This can be a valid argument for why performance isn't better, but there's no async in DX11 either and that has better performance, so in this case it can't be an excuse. There are also DX12 titles that run better than DX11 on NVIDIA too, like ROTTR.

Imagine you are getting 80FPS on a 60Hz monitor. That's a frame every 12.5ms in theory, so you should definitely have a new frame for each refresh, even if the screen can't display ALL of them. But practice is not theory, and in practice you will not have a 12.5ms frame time for every single frame, especially not if the game stutters. Maybe 5 of those 80 frames don't render in 12.5ms but in 35ms instead. You can still get 80 frames in 1s, but some of those frames can be much, much slower than others and cause visible stutter. In my example a slow frame could make the game miss 2 full refresh cycles, and the stutter from something like that happening is very noticeable.

This is why FPS is not a very good measurement of how smooth a game actually runs and why things like measuring frame time variance or 99th percentile FPS are much more relevant. It's also why you would generally want to always be above 60FPS on a 60Hz monitor, since VSync and/or other FPS limiting methods can help a lot when it comes to smooth and even frame delivery.
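
For what it's worth, here's roughly how those numbers get computed, using made-up frame times matching the 80FPS example above (reviewers differ in exactly how they define "99th percentile FPS" / "1% lows"):

import statistics

# Hypothetical data: one second of gameplay, 80 frames, 5 of them stuttering at 35ms.
frame_times_ms = [12.5] * 75 + [35.0] * 5

fps = len(frame_times_ms)                     # 80 FPS, looks great as a headline number
p99_ms = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms)) - 1]
p99_fps = 1000 / p99_ms                       # ~29 "FPS" at the 99th percentile
variance = statistics.pvariance(frame_times_ms)

print(f"FPS: {fps}, 99th percentile frame time: {p99_ms} ms (~{p99_fps:.0f} FPS)")
print(f"frame time variance: {variance:.1f} ms^2")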

The way I like to describe it is to imagine your CPU understood only JavaScript instead of x86 assembly.

If you want to write a fast program (say, in C), a C compiler would have to translate it into JavaScript, and then the CPU would have to translate that JavaScript back into machine code.

DX12/Vulkan on the other hand is like having the CPU expose x86 assembly directly, and deferring the job of generating this assembly from high-level languages to third-party compilers like GCC, MSVC or other programs that are not provided by Intel.

THE POWER OF ASYNC WITNESSED

Game devs are drooling retarded idiots, I don't want them anywhere near my hardware.

Is this because of the 4gb total on the Fury X?

multi-frame drops pretty much mean stalls in pulling resources from host memory, right?

if AMD manages to survive long enough for Vega to get released, I hope a 16 GB version gets made at a non-exorbitant price to prevent this sort of shit in the future.

Or just get better resource management instead of throwing more fucking vram into it?

This is a DX12 benchmark, where the buffer allocations are controlled strictly application side.

This could be a fuck-up by the Eidos frogs, but it seems unlikely given the 1070 sailing by smoothly.

nah, even the 8GB RX 480 is having a stutterfest vs the GTX 1060 6GB see

It has to be some sort of driver fuckup for sure; just because it's DX12 doesn't mean AMD and NVIDIA are completely off the hook. For instance, I was playing DX12 ROTTR on 1080 SLI with 368.95: the game ran beautifully, and in some specific scenes I had 50% better performance compared to DX11. Then I updated the driver to 372.70 and DX12 in that game is now a stuttering, freezing and crashing mess. If AMD is stuttering in DX12 and NVIDIA is not, it's probably their drivers, though this certainly doesn't look like a high-quality DX12 release yet either.

stuttering was a problem on AMD cards well before DX12

richg42.blogspot.nl/2014/05/the-truth-on-opengl-driver-quality.html

A tangentially related but interesting read. I find it unlikely that most game devs will do a good job of optimizing on their own without someone from the console or hardware side directly helping them out.

Hilariously it's an amd game made with amd's equivalent of gameworks.

Roll back to 368.xx, 372.xx are broken.

Tons of people having issues, including me. 368.81 are the last stable ones atm.

I was just illustrating a point that a DX12 issue isn't necessarily just on the game dev's side using an example. I'm done with ROTTR and 372.70 works fairly well for everything else and has the SLI DPC latency fix.

I'm getting some weird ass corruption and flickering shit in TWWH though, may have to roll back when I'm not lazy to see if an older driver fixes it.

>Believing the unstable stack of shit that is the Vulkan/DX12 hype meme

pcgamer.com/total-war-warhammer-dx12-boost-for-amd-still-cant-match-nvidias-dx11-performance/

>GPUopen
>Open sores
>equivalent to Gameworks
nah m8

fuck off n/v/idiot

...

DX12 is shit

AMD is shit

What else is new

If you fell for the AMD meme, I feel bad for you

>AMD
>Shitty company competing with 2 giants in a dying market

Even if they did make anything good, it would be for fucking nothing.
what's the point of AMD?

Oh, would you just look at that. It seems like yet ANOTHER one of the AMD fanboys'/shills' promises did not deliver.

Guys I'm totally, I mean TOTALLY, surprised here.

>Intel moving on to mobile shit
>Nvidia moving on to mobile shit and science/deep computing shit
>AMD still struggles for the dying desktop market at the pace of a fucking snail

literally the biggest joke in tech

This isn't anything new.

I think it's easier to find DX12 games where AMD cards have stuttering issues than it is to find games where stuttering isn't an issue.

The most efficient GPU! 220 watts on the GPU alone.

>AMD stutters the framerate so that "highest" and "average" framerate appear higher in benchmarks

It all fucking makes sense now.
This is what happens when you buy from some company engineered by literal Poo in Loos.

>220 watts on the GPU alone.

At least it's not off the PCI-E slot alone anymore.

>using furmark to measure power usage

This is pretty much why I always go Nvidia even though they're shitheads. Even if I get fewer frames, at least they're rendered smoothly and reliably.

30 extra frames a second isn't very useful if the fucking game is so stuttery that I cannot even enjoy it.

>75 watts 6 pin.
>75 watts pci-e.
> =220 watts

...

Fun article but it's past its expiration date now.

i have 2 PCs, one with an AMD FX-8350 and one with an i5. the AMD reaches 75C all the time and the i5 still hasn't passed 50C. i've replaced the fan and paste multiple times with no result. never doing AMD again. where did it all go wrong?

>"AMD 480 is better in DX12!"
>1060 outperforms it in practically every dx12 benchmark

what's the point of team red anyways?

Let's wait for Vega.

GPUOpen is neither equivalent to GoyWorks due to its open source nature, nor actually even used in this game (CHS shadows aren't part of it). Apart from that, great post!

Every frame times thread I just want to smash OP with a mallet.

There's a fucking option called VSYNC in every goddamn game, and when you enable it your frame times magically stabilize beyond any possible measure of human perception.

A-amada K-k-kokoro

Enjoy your input lag retard.

>buying expensive gpu only to limit and introduce input lag

vsync doesn't fix high frametime spikes at all.

...

GoyWorks is still a thing; nVidia is likely in the process of converting their middleware into DX12/Vulkan-usable libraries. GPUOpen is also a thing though, and we all saw how SIF worked in DOOM. The thing is that most developers are building their games for the PS4/Xbone, which run on AMD, and so will their replacements. Those developers will use GPUOpen libraries in their PC ports to squeeze out performance, while nVidia-sponsored games in the next year or so will run goyworks. We'll be back to the way things worked in DX11. The only difference now is that your CPU can do a bit more of the work since it isn't a single-threaded clusterfuck doing all of the goddamned work.

>stutters hard
>still manages better minimum and maximum framerate
so i guess AMD still wins then.

>what's the point of AMD?

AMD is just there so Intel and Nvidia don't get broken up due to monopoly laws.

There is good evidence that they are actually propping AMD up as a token competitor, keeping them going just so Intel and Nvidia can keep their pseudo monopoly without restriction.

>implying antitrusts would give a fuck if AMD was gone
Nvidia and Intel could say they compete with Qualcomm the same way Microsoft competes with Apple

>caring about minimum framerate when everything is stuttering
AMDrones everyone

if the minimums are above 60 you literally will not see stuttering.

...

It's going to keep falling through this week or so as the market corrects itself back to around $6.00.
It's an overreaction to AMD selling more shares to cover their debt.

>There is good evidence that they are actually propping AMD up as a token competitor
waiting for the evidence

The fact AMD stays in business despite fucking up completely 100% of the time

the fact they somehow compete with two corporations worth billions using nothing but scraps

Frames get pushed to your display in 16.6ms multiples (on a 60Hz monitor). If a frame takes longer to render than some multiple, it'll get pushed out at the next multiple.

VSync on:
If a frame takes 17ms to render, the previous frame will be sent again at 16.6ms, and the frame that's currently rendering will be late and get pushed out at 33.2ms (or at whichever 16.6ms boundary comes first after it finishes). So the previous frame gets sent to the display twice, then the new frame gets sent. This is a microstutter, and some people like myself are ridiculously susceptible to noticing them, even if the average framerate is way above 60.

VSync off:
If a frame takes 17ms to render, it gets swapped in as soon as it's done, in the middle of the display's scanout, so the display ends up showing part of the old frame and part of the new one. This generally results in less perceived latency and removes most microstutters, but it causes tearing where the two frames don't line up due to the in-game camera motion between them, with larger movements causing more obvious tears.
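
Here's a toy simulation of the VSync-on case (made-up frame times, and it ignores buffering/queueing details) showing how a slow frame forces the previous one to be shown again:

import math

REFRESH_MS = 1000 / 60          # 16.67ms per refresh on a 60Hz monitor

# Made-up render times: two hitches, one mild (34ms) and one bad (70ms).
frame_times_ms = [16, 16, 34, 16, 16, 70, 16]

def vsync_schedule(times):
    """Return the refresh index each frame first appears on, assuming frames render
    back to back and a finished frame is shown at the next refresh boundary."""
    done = 0.0
    shown_at = []
    for t in times:
        done += t
        shown_at.append(math.ceil(done / REFRESH_MS))
    return shown_at

slots = vsync_schedule(frame_times_ms)
for i in range(1, len(slots)):
    missed = slots[i] - slots[i - 1] - 1
    if missed > 0:
        print(f"frame {i} missed {missed} refresh(es): previous frame shown again -> visible stutter")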

At least amada doesn't have DPC latency issues like every nvidia card.

Wtf I hate AMD now