GTX 1060 faster than 480 on average

>gtx 1060 faster than 480 on average
>most games tested were dx11
>tested with high end CPUs like 6700k

>rx480 held back horribly by weaker processors, even in dx12/vulkan
>tested with x4 955 and i5 750

So if you're using a weaker CPU you should get the 1060, but what's the point where it levels out?

Would an i5 4460 hold back the rx480 enough to justify the 1060 instead?
How about an i7 2600?

Other urls found in this thread:

extremetech.com/gaming/178904-directx-12-detailed-backwards-compatible-with-all-recent-nvidia-gpus-will-deliver-mantle-like-capabilities
bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

FPS depends greatly on settings like high shadows, view/draw distance, AO, and AA

Always opt for a better CPU. An i5 can and will hold back either GPU depending on the graphical settings used, regardless of whether dx12/dx11 optimises draw calls or not.

I have an i7 870 at 4.2ghz, is it time to upgrade? Monitor is 1440p.

real time shadows and reflections destroy AMD performance.

ive never seen an i5 break a sweat with a mid range gpu

You're fine for now.

That's what I thought.
Should be fine for another 2-3 years too at least, as long as I'm getting 60fps without a cpu bottleneck I'm happy.

On the topic of bottlenecks
Will my 6600k bottleneck the Fury X I have coming in?

when we look at pic related with doom - vulkan you might notice something: the 1060 wins with weaker processors, but loses to the 480 with stronger processors.

there is another thing to note here as well. with the x4 955, the gains between the 480 and 1060 are near identical: roughly a 10fps increase each. move over to the i5 750 and the 480 receives higher gains than the 1060: the 480 increases by an extra 18fps while the 1060 gains 12fps. what's important to note here is that the 1060 received roughly the SAME fps boost with the i5 750 as it did with the x4 955. when we move to the i7 6700k is when the 480 skyrockets over the 1060: the 480 receives a 32fps increase while the 1060 receives a small 3fps increase.
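a rough way to picture those numbers: delivered fps is just the minimum of what the cpu can feed and what the gpu can render, so a faster cpu only shows up once the gpu has headroom left. quick sketch with made-up fps caps (none of these numbers come from the benchmark, they're purely to show the min() behavior):

```python
# hypothetical per-component fps caps; frame rate is limited by
# whichever side is slower, so cpu upgrades only help a gpu that
# still has headroom
cpu_fps = {"x4 955": 55, "i5 750": 75, "i7 6700k": 140}       # assumed values
gpu_fps = {"rx 480 (async on)": 120, "gtx 1060": 95}          # assumed values

def delivered_fps(cpu, gpu):
    # the bottleneck model: the slower component caps the frame rate
    return min(cpu_fps[cpu], gpu_fps[gpu])

for cpu in cpu_fps:
    for gpu in gpu_fps:
        print(cpu, "+", gpu, "->", delivered_fps(cpu, gpu), "fps")
```

with the weak cpu both cards sit at the same cap; only the 6700k lets the 480's higher ceiling actually show, which matches the pattern in the chart.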

when we look at extremetech.com/gaming/178904-directx-12-detailed-backwards-compatible-with-all-recent-nvidia-gpus-will-deliver-mantle-like-capabilities we find that amd and nvidia are on an equal playing field when it comes to driver overhead. amd no longer suffers from extreme driver overhead; both are similar now under dx12 and vulkan.

so whats going on here?

in doom - vulkan, async is enabled on amd cards and used HEAVILY, while on nvidia cards async is disabled. the x4 955 is simply too weak to handle the 480 with async; all the 480 benefits from there is the reduced driver overhead, same as nvidia. when we move up to the i5 750, we see a stronger increase with the 480 than with the 1060 (18fps vs 12fps): the i5 750 is just powerful enough for the 480 to start benefiting from async. the 6700k, finally, is powerful enough to completely keep up with the 480, feeding all of the async work and letting it deliver amazingly strong performance.
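the async benefit itself is easy to model: without async, graphics and compute run back to back each frame; with async, the compute work hides under the graphics work. toy frame-time numbers below, purely illustrative:

```python
# toy frame-time model, milliseconds of work per frame
# (made-up numbers, just to show why async compute helps)
gfx_ms = 10.0       # graphics work per frame
compute_ms = 4.0    # compute work per frame

serial_frame = gfx_ms + compute_ms       # no async: one after the other
async_frame = max(gfx_ms, compute_ms)    # async: compute overlaps graphics

print(f"serial: {1000 / serial_frame:.0f} fps, async: {1000 / async_frame:.0f} fps")
```

the catch is that the cpu still has to submit both workloads fast enough every frame, which is why the 955 sees none of this and the 6700k sees all of it.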

bottom line: you don't need the latest and greatest i7. all you need is sandy bridge and above. technically even a nehalem in the 3ghz range will do.

not by a long shot. 6600k is MORE than enough for the fury x.

Should be fine.
Just oc that mothertucker

>when we look at extremetech.com/gaming/178904-directx-12-detailed-backwards-compatible-with-all-recent-nvidia-gpus-will-deliver-mantle-like-capabilities we find that both amd and nvidia receive equal playing field when it comes to driver overhead. amd no longer suffers from extreme driver overhead. both are similar now with the usage of dx12 and vulkan.

also to add, amd HAD to make their drivers multi-threaded for dx12, like nvidia did for dx11. they had no choice because dx12 REQUIRES multi-threaded drivers to work. so overall amd's driver overhead in dx12 is nearly identical to nvidia's now.

see
more importantly:
>you don't need the latest and greatest i7. all you need is sandy bridge and above. technically you need a nehalem in the 3ghz range.

Why couldn't they make their dx11 drivers multithreaded too?
For a company that bet on multithreading as the future with their CMT CPUs, that seems like a massive oversight to me

Wish they'd tested it with an i5 2500, 3470, 4460, and fx6300, fx8350 too.
Also 880k.

I appreciate that they did what they did, but I wish they did more

it was mostly because of the way gcn works. gcn is a parallel architecture; dx11, even with multi-threaded command queues, is still heavily single-threaded.

>extremetech.com/gaming/178904-directx-12-detailed-backwards-compatible-with-all-recent-nvidia-gpus-will-deliver-mantle-like-capabilities
>See how, in DX11, the entire workload is hanging on a single thread with extremely low utilization on the other threads? That’s a problem — with the kernel-mode driver running on the same thread as the game and the D3D layer, there’s just not much for the other threads to do. The second graph shows how, by splitting the workload more evenly, the game can hit much lower latencies. Better latencies translates directly into higher frame rates.
>This pair of screenshots from 3DMark 2012 further illustrate the difference. Total CPU time is dramatically reduced in DX12 by efficiently reallocating data across all cores.

dx11 multi-threaded command queues essentially take single-threaded code and try to multi-thread it, and for amd there weren't enough benefits to justify doing so. it means rewriting how the driver works per game engine, and in the end parallel features like async still wouldn't be supported. it wasn't worth the effort when they had built hardware capable of handling all of that itself, such as compute and graphics processing at the same exact time, which dx11 didn't support. it was more beneficial for them to push apis like dx12 & vulkan and create mantle to showcase it.
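roughly what the structural difference looks like: dx11-style, one thread records every command; dx12-style, each thread records its own command list and they all get submitted together. a sketch of that split (python's GIL means the threaded version isn't actually faster here, the point is how the work divides):

```python
from concurrent.futures import ThreadPoolExecutor

DRAW_CALLS = list(range(8000))  # pretend draw calls for one frame

def record(chunk):
    # stand-in for building a command list from a slice of draw calls;
    # in dx12 each thread owns its own command list, so no lock is needed
    return [("cmd", d) for d in chunk]

# dx11-style: everything recorded on a single thread
dx11_lists = [record(DRAW_CALLS)]

# dx12-style: recording split evenly across 4 worker threads
n = 4
chunks = [DRAW_CALLS[i::n] for i in range(n)]
with ThreadPoolExecutor(max_workers=n) as pool:
    dx12_lists = list(pool.map(record, chunks))

print(len(dx11_lists), "list vs", len(dx12_lists), "lists,",
      sum(len(c) for c in dx12_lists), "commands total")
```

same total work either way; the difference is that the dx12 layout keeps every core busy instead of piling the whole frame onto one thread.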

as do i. i don't understand why they jumped from 09/10 processors all the way up to an essentially 2015/16 processor, skipping more than half a decade of improvements. but honestly, judging by the leap from the awful 955 (core 2 quad level) to the 750 (2.66ghz first gen nehalem), a 2500k should be more than enough.

amd was working on mantle for a long time, years before they announced it in september 2013.

some clues were hinted at back in 2011:
bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

the work nvidia had to do with dx11 to get its multi-threaded command queues working was daunting. it's very difficult to implement; it took nvidia two years to enable it in their drivers. it also required game developers to opt into dx11 multi-threaded queues, plus heavy work on the drivers.

amd decided instead to lobby for better apis, going as far as creating their own to showcase the benefits of an api built from the ground up to be multi-threaded and to incorporate parallel features such as async.

the result? they got microsoft to release dx12 and handed mantle over to khronos to become the basis of an open standard, vulkan.

it paid off, but it also caused amd to take a beating for a few years in games that were cpu heavy. amd's lack of multi-threaded dx11 drivers didn't hurt them much in games that were gpu bottlenecked, but they took deep hits in games that were cpu limited.

the statement "most games are not cpu limited anymore" wasn't necessarily true. amd's drivers clearly showed many still were, but since nvidia spent two years implementing dx11 multi-threading, that lowered cpu overhead in those games.

now comes dx12, and adoption has been growing pretty fast. so far it's been showing amd in a much better light: they no longer suffer from driver overhead, and they finally have an api that can take full advantage of all the hardware available on their architecture, gcn.

nvidia, on the other hand, got shafted. their hardware isn't well suited to all the parallelism of dx12, and the cpu overhead reduction wasn't that beneficial to them. they do receive a cpu reduction from dx12, but it isn't as drastic as amd's.

their hardware also cannot take full advantage of the parallelism offered by dx12, like async. they can only run compute and graphics one at a time, not at the same exact time with dedicated hardware like gcn.

pascal band-aids this with improved preemption (stopping one command stream to switch to another) and dynamic load balancing. under light loads it can receive a small boost, but under moderate and heavy loads you can see performance regressions. this was noted on the 1080 when running the crazy preset: at 1080p the 1080 receives no boost with async on, and at higher resolutions it regresses compared to its dx11 results.

but compared to maxwell, pascal at least can benefit from light usage. that's one of the reasons nvidia hasn't fully enabled async support on maxwell and limits it to a per-application basis; it's why in time spy async support is completely disabled by the drivers for maxwell and kepler.

so right now nvidia has to focus on raw gpu core power. it needs strong single threaded performance to switch back and forth fast enough.

take the 1060 for example. it has a base clock of 1.5ghz and a standard boost of 1.7ghz, and it virtually operates at 1.7ghz 24/7, only dropping lower when thermals get crazy.

the 480 on the other hand has a base of ~1.12ghz and a boost of ~1.27ghz. in dx11 all the extra hardware of gcn on the 480, its 36 compute units and 4 ACEs, goes unused. so it's a battle of raw power and driver overhead, and the 1060 pulls ahead thanks to its lower driver overhead and stronger clocks.
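for reference, the raw single-precision throughput from the published specs (shaders × 2 flops per cycle × boost clock) shows the 480 actually has more on paper, it just can't use it all under dx11:

```python
# rough single-precision throughput: shaders * 2 FLOPs/cycle * clock (GHz)
def tflops(shaders, ghz):
    return shaders * 2 * ghz / 1000

gtx_1060 = tflops(1280, 1.708)  # 1280 cuda cores at the 1708mhz rated boost
rx_480 = tflops(2304, 1.266)    # 36 CUs * 64 = 2304 shaders at 1266mhz boost

print(f"1060: {gtx_1060:.1f} tflops, 480: {rx_480:.1f} tflops")
```

so it's ~4.4 vs ~5.8 tflops on paper; the 1060 winning dx11 benches anyway is the clocks plus driver overhead doing the work, and the dx12/async results are what happen once those extra shaders actually get fed.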

in dx12 the driver overhead is eliminated and async becomes available, so those dedicated 36 CUs and 4 ACEs can be utilized. the 480 pulls ahead of the 1060 in almost all dx12 titles.

even in the famous tomb raider dx12 benchmark, the 1060 doesn't receive anywhere near as much of a boost as the 480 does, even with async enabled. the 1060 still comes out the winner thanks to the light usage of async in that title, but the lower driver overhead and the extra boost from async give amd a nice bump. if the 1060 were clocked down to the 480's speeds they would probably tie, and the 480 would come out ahead in heavier async loads thanks to its dedicated hardware.

hard to believe, but clock for clock, pascal and polaris are very similar in pure gpu core performance, even in dx11 titles. what hurts amd there is driver overhead.

in dx12 it's a far more even playing field.

>that fury x
yeah, i know. it's thanks to its awful performance when tessellation is used heavily. polaris with gcn 1.4 brought a lot of tessellation improvements: its geometry engines got updated dedicated tessellation units and features like primitive discard acceleration, so it's now competitive against nvidia at 16x levels. when higher levels are used, as in tomb raider (around 32x - 64x), it's going to fare better than gcn 1.3 and 1.2 do.

dx12 tomb raider is really dx12 in name only. yes, it does utilize async, but given the weak usage of async it really only brought reduced driver overhead.

>take the 1060 for example. it has a base of 1.5ghz and a boost of 1.7ghz standard. it virtually operates at 1.7ghz 24/7. only drops lower when thermals get crazy.
eh, not only that, but most 1060s boost higher than 1.7ghz out of the box, usually in the 1.8ghz range.

but yeah clock for clock, they would be awfully similar.

>tfw 2500k @ 4.4GHz maxes out in GTA V
help ;(

When I upgraded to an i7 I saw massive improvement when playing gta. It's really cpu intensive.

gta v is one of the few titles that is very cpu limited. it's also one of a handful that can scale up to 10 threads, so even i7s like the 2600k receive a nice boost over their non-ht quad counterparts (2600k vs 2500k).

though at 4.4ghz it should still offer decent playable performance.

even for future titles, investing in a used 2600k might be worthwhile. iirc even ivy bridge processors can be used on z68 with a bios update, so a 3770k would be worth looking at as well.

>even ivy processors can be used on z68 with a bios update so a 3770k [would work]
damn. I've been looking at a 2600k, but I may as well grab a 3770k

would a 6600k work or do I need a 6700k

6600k would be plenty

This is embarrassing for AMD, honestly. The CPU shouldn't be affecting performance this badly when it clearly isn't the game itself causing the CPU bottleneck.

That's just GCN, which gets bottlenecked by even an Ivy Bridge i5 3570k

That's a man's ass.

Oh baby

6700k + 1070 master race

>not getting the 1080
Poorfag.

>Willfully getting completely cucked when the TI comes out

is this also the same scenario as the nvidia cards "getting gimped" over time? where in each new generation of tests, faster cpus are used, thus alleviating the radeons while the geforce cards stay the same?

Yep, there's no gimping