How old were you when you outgrew nvidia's shilling?


RIP.

>mfw

>Async is so insignificant that you can just give it to the CPU
hahaha holy shit
Fucking AMDtards and their snake oil

Four years ago I decided to go back to Nvidia because it looked like they were offering the best bang for the buck among near-top-end cards.

Since then, Nvidia has kept raising price points so upgrade paths are undesirable, sold mid-range cards at high-end prices and then released the real high end afterwards to screw over early adopters, shipped buggy drivers at launch, gimped drivers for EOL cards to push new stock, driven monitor prices up with G-Sync while FreeSync does the same thing at no cost, pushed GameWorks titles, and tried to patent the GPU, while the AMD cards I could have bought at the time soon became much faster and the better long-term buy.

I've learnt my lesson and I'm returning to AMD with a 480. It's enough for 1080p and any future DX12 games. By the time VR or higher resolutions are worth investing in, AMD will have released their high-end cards, and I'd gladly give them my money again knowing I'll get a quality product that will be well supported for years to come.

Never. :^)

this desu

No, they don't use the CPU.
It's actually something worse than that: if you run CodeXL while Time Spy is running, you will see something very strange.

That test isn't running async compute + graphics the way DX12/Vulkan intends and the way GCN is (mostly) built for. What you see is graphics and compute being interleaved rather than genuinely run in parallel, which is why Maxwell cards are gaining instead of showing the usual regression.
People on OCN and Beyond3D are already tearing into 3DMark for this: it's basically running the only way Nvidia can do "async", i.e. preemption, so the test is essentially built for Nvidia.

fuck off pooinloo shill

overclock.net/t/1605674/computerbase-de-doom-vulkan-benchmarked/220#post_25351958
Kek, it's already getting traction to blow up in their faces once more.

How long until the 1060 replaces the 980ti?

>that picture
lmao

Never, because Maxwell is very similar to Pascal, unlike Kepler, which was completely different.

shill/10

Almost the same CPU load...

Holy shit. I knew 3dmark was jerking it with nvidia...

Nice, and I'm a UFO.

You guys know that Time Spy uses concurrent execution and not parallel execution, right?

They're using concurrent execution to put Pascal in the best light possible; if it were parallel execution, Pascal would have been BTFO.
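
To make the concurrent-vs-parallel distinction concrete, here is a rough CPU-side analogy in plain C++. It has nothing to do with how Time Spy or any GPU driver is actually written; it just shows that two jobs run back to back take roughly the sum of their times, while the same two jobs run genuinely in parallel take roughly the time of the longer one.

// CPU-side analogy only: one worker doing both jobs back to back vs.
// two workers genuinely running at the same time.
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

static void fake_work(const char* name)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(100)); // stand-in for real work
    std::cout << name << " done\n";
}

int main()
{
    using clock = std::chrono::steady_clock;

    // Back to back: "graphics" then "compute", no overlap at all.
    auto t0 = clock::now();
    fake_work("graphics");
    fake_work("compute");
    auto serial_ms = std::chrono::duration_cast<std::chrono::milliseconds>(clock::now() - t0).count();

    // Truly parallel: both workloads running at the same time.
    auto t1 = clock::now();
    auto a = std::async(std::launch::async, fake_work, "graphics");
    auto b = std::async(std::launch::async, fake_work, "compute");
    a.wait();
    b.wait();
    auto parallel_ms = std::chrono::duration_cast<std::chrono::milliseconds>(clock::now() - t1).count();

    std::cout << "back to back: " << serial_ms << " ms, parallel: " << parallel_ms << " ms\n";
}

Queue the two jobs one after another and you wait roughly 200 ms; hand them to two workers that really run simultaneously and you wait roughly 100 ms. That gap is the whole argument about what "async" gains should look like.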

thread/

There is a very big difference between synchronous and asynchronous compute, you know...

>2008 Novidya GPU
>still supported on GNU/Linux with nouveau drivers
I'm not becoming an AMDrone any time soon.
I don't care about newest gaymen.

how old were you when you litterally took the big red cock up your anus you fucking faggot

Here are my results with Time Spy @ 1080p on my Strix 1080 (non-OC edition):

async on:
3dmark.com/spy/25459
Graphics Score: 10,782
CPU Score: 5,405

async off:
3dmark.com/spy/25392
Graphics Score: 9,965
CPU Score: 5,387

here is the default 1440p bench:
3dmark.com/spy/24928
Graphics Score: 6,932
CPU Score: 5,451

I see no drop in CPU performance with async on. If anything, one could claim I saw a drop with it off.
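
Working out the deltas from those scores (just the arithmetic on the numbers posted above, nothing more):

// Percentage deltas between the async-on and async-off runs posted above.
#include <cstdio>

int main()
{
    const double gfx_on = 10782.0, gfx_off = 9965.0;
    const double cpu_on = 5405.0,  cpu_off = 5387.0;

    // Graphics: (10782 - 9965) / 9965 is roughly +8.2% with async on.
    std::printf("graphics delta: %+.1f%%\n", (gfx_on - gfx_off) / gfx_off * 100.0);

    // CPU: (5405 - 5387) / 5387 is roughly +0.3%.
    std::printf("cpu delta: %+.1f%%\n", (cpu_on - cpu_off) / cpu_off * 100.0);
}

That comes out to roughly +8.2% on the graphics score with async on and about +0.3% on the CPU score, which is well within run-to-run noise.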

Performance Results – Testing Asynchronous Compute

One of the more interesting aspects for me with Time Spy was the ability to do a custom run of the benchmark with asynchronous compute disabled in the game engine. By using this toggle we should be able to get our first verified data on the impact of asynchronous compute on AMD and NVIDIA architectures.

Here is how Futuremark details the integration of asynchronous compute in Time Spy.

With DirectX 11, all rendering work is executed in one queue with the driver deciding the order of the tasks.

With DirectX 12, GPUs that support asynchronous compute can process work from multiple queues in parallel.

There are three types of queue: 3D, compute, and copy. A 3D queue executes rendering commands and can also handle other work types. A compute queue can handle compute and copy work. A copy queue only accepts copy operations.

The queues all race for the same resources so the overall benefit depends on the workload.

In Time Spy, asynchronous compute is used heavily to overlap rendering passes to maximize GPU utilization. The asynchronous compute workload per frame varies between 10-20%. To observe the benefit on your own hardware, you can optionally choose to disable async compute using the Custom run settings in 3DMark Advanced and Professional Editions.


>pcper.com/reviews/Graphics-Cards/3DMark-Time-Spy-Looking-DX12-Asynchronous-Compute-Performance
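
For anyone who wants to see what those three queue types look like in code, here is a minimal D3D12 sketch. It just uses the plain API on the default adapter, has nothing to do with Futuremark's actual engine code, and omits all error handling.

// Minimal sketch: creating the three D3D12 queue types described above.
// Default adapter, no error handling.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    D3D12_COMMAND_QUEUE_DESC desc = {};
    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue, copyQueue;

    // 3D ("direct") queue: rendering commands, but it can also take compute and copy work.
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    // Compute queue: compute and copy work only.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // Copy queue: copy operations only.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&copyQueue));

    return 0;
}

The thing worth noticing is that the API only lets an engine express that work on different queues may overlap; whether the 3D and compute queues actually execute at the same time is entirely up to the hardware and driver, which is exactly what this whole thread is arguing about.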

better than a green cock :^)

NVIDIA BTFO!!!!! LONG LIVE TEAM RED!

GO GO GO CURRY

AMD should just change their slogan to "The way games are meant to be waited for."

First off, what Time Spy is doing is synchronous compute, not asynchronous.

They're simply filling in the utilization gaps and nothing more. They don't execute compute and graphics tasks in parallel, because if they did, Pascal would get shredded.

first off, you don't know what you're talking about

man amd shills will shill

Even though Time Spy showcases AMD's lead in async performance, highlights Maxwell's lack of async support, and shows Pascal's smaller, weaker-than-AMD gains with async, AMD shills still feel the need to tear it apart and throw a tantrum over it.

Normally Pascal has no async gains and Maxwell gets negative performance so the benchmark is doing something wrong

First of all, yes: when you see Maxwell cards GAINING instead of regressing, it means they aren't using async compute at all. Every goddamn game out there shows this, yet Time Spy says otherwise...
Guess which one is correct.

my wife's son still thinks nvidia is better.
this is exactly what is wrong with the world.

You tell me that I don't know what I'm talking about without an argument as to why I'm wrong.

Then you proceed to call me an AMD shill because I don't agree with you.

Don't reproduce.

Dude, did you have a stroke while typing this? Learn English, mate.

piss off
better dead than red

Did anyone really expect a different result than your pic related? They have to use software; what else would they use when there is no hardware implementation? Magic tricks? Hmm, thinking about it, they might. They're used to performing that kind of "illusion" in their reveals, announcements, and benchmark results.

payworks

I want to post that forum post on 3DMark's forums and see what they say about it, but annoyingly their site is down. I'm going to save the link, and as soon as 3DMark is back up I'll post it and see if anyone responds (doubtful, but worth a shot) and whether they deny it (most likely).

This is why we can't have closed source benchmarking software. I really wish someone would make an open source benchmark.

>CPU score varies by 2%

NVIDIA IS PUSHING ASYNC TO CPU GUYS! SHILLLL NVIDIA IS SHILLINGGGG!

This is not async.

This is what Nvidia calls "dynamic load balancing":
they basically fill the idle gaps in the workload instead of spreading it across the cores, which makes even cards like Maxwell 2.0, which is known to regress, actually gain performance...
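
Here is a toy frame-time model of what gap filling versus real parallel execution would mean. All the millisecond numbers are made up purely for illustration; the only thing tied to anything real is the compute share of roughly 10-20% of the frame, which matches Futuremark's own description quoted above.

// Toy frame-time model: "fill the idle gaps" vs. running compute truly in parallel.
// All numbers are invented for illustration only.
#include <algorithm>
#include <cstdio>

int main()
{
    const double graphics_ms = 10.0;  // time the graphics queue is busy per frame
    const double idle_ms     = 1.5;   // idle bubbles inside that graphics work
    const double compute_ms  = 2.0;   // async compute workload per frame (~20% here)

    // No async at all: everything runs back to back.
    double serial = graphics_ms + compute_ms;

    // "Dynamic load balancing": compute only slots into the idle bubbles;
    // whatever does not fit still runs after the graphics work.
    double gap_filled = graphics_ms + std::max(0.0, compute_ms - idle_ms);

    // Dedicated parallel compute: the compute work fully overlaps the graphics work.
    double parallel = std::max(graphics_ms, compute_ms);

    std::printf("serial: %.1f ms, gap-filled: %.1f ms, parallel: %.1f ms\n",
                serial, gap_filled, parallel);
}

In this toy model, gap filling only recovers the idle bubbles (10.5 ms vs 12 ms per frame), while fully parallel execution hides the entire compute workload behind the graphics work (10 ms). The bigger the compute share, the bigger the difference between the two approaches.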

>Did not read

overclock.net/t/1605674/computerbase-de-doom-vulkan-benchmarked/220#post_25351958

3dmark is only useful for testing your own card for overclocking and comparing like for like. Using it to compare against other brands is meaningless.

They took 2% off the CPU to get 5% on their shitty GPU.

>oh look, another thread with the same novidia praise posts
NVIDIOTS ON SUICIDE WATCH

Same slimy nvidia shit, different generation.

Never used them

This