1060 vs 480 Benchmarks

Why did the 480 fail so badly? Nvidia just came out with something better in every way.

Other urls found in this thread:

techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/8.html
techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/9.html
techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/10.html
techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/11.html
techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/14.html
techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/15.html
anandtech.com/show/5699/nvidia-geforce-gtx-680-review/3
developer.nvidia.com/dx12-dos-and-donts
overclock.net/t/1606224/various-futuremarks-time-spy-directx-12-benchmark-compromised-less-compute-parallelism-than-doom-aots-also
overclock3d.net/reviews/gpu_displays/rise_of_the_tomb_raider_directx_12_performance_update/5
pcper.com/reviews/Graphics-Cards/3DMark-Time-Spy-Looking-DX12-Asynchronous-Compute-Performance
extremetech.com/gaming/231527-new-doom-update-adds-vulkan-support-amd-claims-substantial-performance-boost
forums.geforce.com/default/topic/951723/geforce-drivers/announcing-geforce-hotfix-driver-368-95/
forums.geforce.com/default/topic/941579/geforce-1000-series/gtx-1080-high-dpc-latency-and-stuttering/
overclock.net/t/1605618/nv-pascal-latency-issues-hotfix-driver-now-available
forums.geforce.com/default/topic/939358/geforce-1000-series/gtx-1080-flickering-issue/
anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/4

>Nvidia knew about the 480 performance
>fed unrealistic expectations to the public
>/r/amd and /r/ayymd eat it right up
>regurgitated 1000x
>release a product that suspiciously does all and more of the initial 480 hype

...

...

...

Gee, great. That one game and the small handful of games that will use Vulkan I can look forward to, thanks.

What do you expect from a company that is $1bn in debt and has 1/100th of the budget its competitors have? I'm still surprised AMD is alive at all. They've been failing hard in the CPU division with their utter shite FX processors, and their GPU division has been so vastly outmarketed that even when they're competitive, they'll still sell millions fewer than Nvidia.

>even AdoredTV admitted the rx 480 and gtx 1060 are about equal in dx12 and the 1060 wins in dx11
Literally no reason to buy an rx480 if you have a backlog of any size with dx11 games.

You only get that result if you've got the absolute top of the range i7.

...

...

This. Nvidiots think we don't know about their false flagging

>driver overhead
FUCK OFF

I doubt it, AMDtards will create false hype all on their own

Sadly, AMD driver overhead is real. Just another thing in a long list of reasons not to buy an AMD card.

>X4 955 & I5 750
>literally half a decade old processors
I can't help but think this graph has been built with the single purpose of making one card look better than the other.

>denial

>he really thinks people who will buy a $200/250 card will have a modern i7 or even i5

these are x60 and x80 cards. literal peasant tier cards.

>Why did the 480 fail so badly?

Why do you keep making the same thread and arguing this way from a false premise?
The RX 480 was a huge success for AMD. They sold a ton of cards, a ton of dies to AIB partners, and they sold a huge volume of chips to Apple.

AMD made a cheap die with huge profit margins. It performs well enough for its segment, and manages to hold its own in DX12/Vulkan despite being 15%~ slower than the 1060 in DX11 titles.
Nvidia made a more expensive die, probably with lower profit margins, and they can afford to do that because they have immense market share. Their bread and butter will be GTX 1070 and 1080 sales anyway.

The 480 is pretty awful compared to the 1060. They're the same price, but the 1060 is way faster.

this. i was looking through a bunch of benchmarks and some of them make the 1060 look like it's a whole tier ahead. there would be 3 or 4 cards in between the 1060 and 480 when in reality they should be right next to each other.

By Nvidia's own marketing PR, the GTX 1060 is on average 15% faster than the reference RX 480.
That performance edge just isn't there in a number of DX12 titles, not to mention DOOM when using Vulkan. By TPU's review the RX 480 is only 10% behind the GTX 1060 at 1920x1080, and their lineup of games includes a bunch of GameWorks titles.

You're going to have to try a lot harder if you want to shitpost

>Their bread and butter will be GTX 1070 and 1080 sales anyway.
Not if they stay at that price. Essentially what Nvidia did was introduce two newer high end cards and give us essentially a GTX 980 for a measly 250 bucks. If you're on 1080p (which most people are), a 1070 is overkill and way too expensive. The 1060 is perfect, both performance and price wise. The RX 480 is not a bad card, but it got official meme status, ruining its reputation forever, all due to overly retarded AMD fans and AMD itself. It is also quite a bit slower in DX11, which will stay relevant for quite some time, no matter how much AMD fans wish otherwise.

The only real argument one could use is that the RX 480 seems a bit more ''future proof'' (it's actually a meme). Now hear me out on this one: you are much better off just switching your mid range card every two years instead of sitting on a high end card for 3-4. Unless you are extremely poor (and I don't even mean to insult anyone) and can't afford to dish out ~250 bucks every two years, the 1060 makes more sense than the 480.

11%*

are you thick in the head? how am i shitposting? the same website you just posted a benchmark from proves what i just said about how the 1060 looks like it's a whole tier ahead with 3 or 4 cards in between them in some benchmarks.

techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/8.html
>5 place gap

techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/9.html
>5 place gap

techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/10.html
>3 place gap

techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/11.html
>5 place gap

techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/14.html
>3 place gap

techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/15.html
>2 place gap

if anything i was being generous. now neck yourself. go and be mad somewhere else.

>counting cards as "places" instead of looking at relative performance
This is either top tier comical bait, or you're legitimately a non neurotypical autism spectrum case.

>relative performance
How exactly is that calculated? Do they take a bunch of benchmarks and calculate a combined score from them? Or is it simply what the graphics card is capable of in perfect conditions at 100% hardware power?

They average out the figures in all of their game benches. Their methodology is explained:

>This page shows a combined performance summary of all the tests on previous pages, broken down by resolution.
>Each chart shows the tested card as 100% and every other card's performance as relative to it.
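If you want to sanity check that kind of summary yourself, here's a minimal sketch of that averaging. Every per-game FPS number below is invented for illustration, and TPU's exact weighting may differ:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical per-game average FPS for two cards over the same bench
    // suite. These numbers are invented for illustration, not taken from
    // TPU's actual review data.
    std::vector<double> gtx1060 = {78.0, 62.0, 55.0, 90.0, 47.0};
    std::vector<double> rx480   = {70.0, 58.0, 49.0, 84.0, 41.0};

    // Treat the tested card (the 1060 here) as the 100% baseline and average
    // the other card's per-game ratios -- the gist of a relative perf summary.
    double sum = 0.0;
    for (std::size_t i = 0; i < gtx1060.size(); ++i)
        sum += rx480[i] / gtx1060[i];

    std::cout << "RX 480 relative to GTX 1060: "
              << sum / gtx1060.size() * 100.0 << "%\n";
}
```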

>relative performance

i can't tell if you're legit retarded or just thick in the head. the relative performance chart agrees with me by putting the 1060 a massive 11% ahead, which is almost another tier. there was only a ~15% difference between a 980 and a 970 at release.

>lacking this much self awareness
The relative performance charts don't agree with any point you tried to make, autist. You're counting the number of cards between the RX 480 and GTX 1060 as if it means anything. It doesn't. The "places" that you're getting so mad about is something that only sticks out in your genetically inferior malformed autism monkey brain. "Places" on a chart is not a real metric.

Exactly as I previously stated:
Nvidia's marketing PR slides indicated that the GTX 1060 was 15% faster than the reference RX 480.
TPU's game average shows the RX 480 being 10% behind, and that is including a ton of Nvidia GameWorks titles.
The GTX 1060 doesn't have a huge performance lead in DX12/Vulkan.
A 10%~ average lead isn't that big of a deal no matter how you try to twist it

The RX 480 isn't bad by any means. They're both great value oriented mid range GPUs, Nvidia's GTX 1060 is just on average 10%~ faster in DX11 titles.
Stop getting mad over things, autist. Not that I really care. The mentally ill barely qualify as human.

>no async
>no DX12
>no Vulkan
1060 will age like milk

$200 card vs $300 card

They're not even in the same bracket.

I'm an AMD fanboy, but if I find a 1060 for the same price as the 480 I'm going NVIDIA, sorry poo in the loo

so you really are retarded then. at least we got that cleared up.

i wasn't talking about metrics, if you actually read my post. i said it "makes it look" like they are in different tiers. anyone who isn't knowledgeable about any of this will look at these lists, see some huge gap between the two cards, and probably won't know they're actually competing against each other until they're told.

>A 10%~ average lead isn't that big of a deal no matter how you try to twist it

no. that's pretty fucking massive, and like i said is almost a tier above. the 980 was 12% faster than the 970, the 390x was 7% faster than a 390 and a 1060 is 11% faster than a 480. (all based on tpu's relative performance charts)

Uhh 1060 is $249 and 480 is $239

Seriously the 1060 is going to sell extremely well. At $250 it's a ridiculously good deal, in fact it is the #1 best value GPU you can buy right now.

I know this is a bait thread, but the cheapest 480 ($200) is 20% cheaper than the cheapest 1060 ($250), but only performs 12% worse.

That's okay by me.

That's the 4GB 480 which performs even worse

The Sapphire Nitro 4GB was performing almost the same as an FE 1060

I'm going to screenshot this in case the 490 is 10% faster than a 1070 so I can say the king of amdrones said a gap of that amount doesn't matter.

>The RX 480 isn't bad by any means.

It's really horrible when you compare it to the 1060. There is literally no reason to buy a 480 when the 1060 exists.

outside burgerland, the 480 reference is more expensive than a custom 1060

480 or 1060 to pair with my i5 4670k? I'm hearing the newer AMD cards only like newer CPUs.

>10%~ average lead isn't that big of a deal

1060 obviously. aib 1060s are the same price as reference 480s, and fuck reference cards.

I want that Sapphire RX 480 so bad, gonna OC that shit to 1400 and have better performance than a $300 AIB 1060 in 90% of games (it beats a 2100MHz OCed AIB 1060 in almost every bench)...

that's nuts. Sign me up, only $280

Sad thing is the 1060 has absolutely no OC potential because clock speed doesn't even remotely help improve performance... 100MHz is like a 0.5% performance boost after 1500MHz

>me
>fanboy
>plainly stating the GTX 1060 is a great value oriented GPU with an edge in DX11

Sure, but a performance margin of that size really doesn't matter.
10% turns 30fps into 33fps, 60fps into 66fps, 120fps into 132fps. It's not enough of a performance difference to afford you any more AA, post processing, or higher shadow quality without losing frame rate. Realistically it doesn't matter. Things like frame time variation matter a lot more, and if you wanted to argue that Nvidia was better there on average, you might have something.
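For concreteness, the same arithmetic as a runnable snippet. Nothing here is measured, it's just the 10% figure from above applied to common frame rate targets:

```cpp
#include <initializer_list>
#include <iostream>

int main() {
    // A 10% margin applied to common frame rate targets -- pure arithmetic,
    // not benchmark data.
    for (double base : {30.0, 60.0, 120.0})
        std::cout << base << " fps -> " << base * 1.10 << " fps\n";
}
```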

It really isn't.

>10% oc to base
>13% performance increase in bf3

1060 is way better

is that bf4 benchmark only with dx11? i wonder how well it does with mantle.

>300$ AIB 1060
>Sad thing is the 1060 has absolutely no OC potential because clock speed doesn't even remotely help improve performance.... 100mhz is like 0.5% performance boost after 1500Mhz

AMDrones are so pathetic these days.

yes we need more reviewers doing benchmark comparisons of NVIDIA and AMD using Mantle in BF4!

are you really this retarded?

>Why did the 480 fail so badly?

Because Nvidia has 90% of tech review sites in their pockets.

>it's all just a big conspiracy

Gotta love this one, add it to the list of AMDrone excuses.

You know, out of all the things they yell about, I am surprised no one has brought up that NVIDIA killed 3/4-way SLI for games, and that the enthusiast key only works for benchmarks and other software.

Well, if you don't have a top of the line i7, one of those is better. The only person I know with a high end i7 has a GTX 1080.

AMDfags can't grasp that people going for a budget build will either have an old ass CPU or will buy an i3. Hilariously, even the shitty AMD CPUs are getting more fps with Nvidia cards.

one reason nvidia had stronger dx11 drivers is that they were multi-threaded, which lowered driver overhead by spreading it across multiple threads.

but another reason was their use of a software scheduler.

one of the reasons fermi ran so hot was that it used a hardware scheduler, just like all amd gcn based cards do. hardware scheduling draws a lot of power, and more power means more heat. why use a hardware scheduler? because it will always be faster than a software one: less overhead, and the gpu can schedule much faster than software can.

the problem with a hardware scheduler? once built, you cannot modify it. you have to build a whole new chip if you want to update the hardware scheduler.

but nvidia, wanting to move on from their housefire fermi, decided to remove hardware based scheduling with kepler and beyond. this is the main reason kepler used far less power and ran cooler than fermi. nvidia realized that with dx11 you didn't need a complex hardware scheduler; most of the scheduler went under-utilized and was overkill. dx11's multi-threading capabilities, plus making their drivers multi-threaded, alleviated a lot of the driver overhead you'd otherwise endure with a software scheduler. in turn, this gave them more control over scheduling, letting them fine-tune the drivers for individual games. well, they had to. this created a ton of work for nvidia's driver team, but it let them wring every ounce of juice from their cards while lowering power and heat.

maxwell continued this by removing more hardware based scheduling.

the problem? dx12 and vulkan need a hardware scheduler to be taken full advantage of. you need it for the complex computations of async and to manage compute + graphics operations at the same time. they're complex, and you need the performance.

and async is only going to become more useful in graphics and games.
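to make that trade-off concrete, here's a toy model of in-order (static) scheduling vs hardware scheduling. the cycle counts and the stall penalty are made up; it's an analogy only, not a model of any real gpu:

```cpp
#include <iostream>
#include <vector>

// Toy model of the trade-off described above. Each "op" takes some cycles;
// an op that isn't ready (e.g. waiting on memory) stalls a static, in-order
// scheduler, while a hardware scheduler can pick any ready op and hide the
// stall behind useful work. Analogy only -- not a model of any real GPU.
struct Op { int cycles; bool ready; };

int staticSchedule(const std::vector<Op>& ops) {
    int t = 0;
    for (const auto& op : ops) {
        if (!op.ready) t += 10; // stall until the dependency resolves
        t += op.cycles;
    }
    return t;
}

int hardwareSchedule(const std::vector<Op>& ops) {
    int t = 0;
    // Run every ready op first; by the time we come back, the stalled ops'
    // dependencies have resolved behind the useful work.
    for (const auto& op : ops) if (op.ready)  t += op.cycles;
    for (const auto& op : ops) if (!op.ready) t += op.cycles;
    return t;
}

int main() {
    std::vector<Op> ops = {{4, true}, {4, false}, {4, true}, {4, false}};
    std::cout << "static (in-order):  " << staticSchedule(ops)   << " cycles\n";
    std::cout << "hardware (dynamic): " << hardwareSchedule(ops) << " cycles\n";
}
```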

here's a nice article from kepler's launch done by anandtech:
>anandtech.com/show/5699/nvidia-geforce-gtx-680-review/3

>GF114, owing to its heritage as a compute GPU, had a rather complex scheduler. Fermi GPUs not only did basic scheduling in hardware such as register scoreboarding (keeping track of warps waiting on memory accesses and other long latency operations) and choosing the next warp from the pool to execute, but Fermi was also responsible for scheduling instructions within the warps themselves. While hardware scheduling of this nature is not difficult, it is relatively expensive on both a power and area efficiency basis as it requires implementing a complex hardware block to do dependency checking and prevent other types of data hazards. And since GK104 was to have 32 of these complex hardware schedulers, the scheduling system was reevaluated based on area and power efficiency, and eventually stripped down.

>The end result is an interesting one, if only because by conventional standards it’s going in reverse. With GK104 NVIDIA is going back to static scheduling. Traditionally, processors have started with static scheduling and then moved to hardware scheduling as both software and hardware complexity has increased. Hardware instruction scheduling allows the processor to schedule instructions in the most efficient manner in real time as conditions permit, as opposed to strictly following the order of the code itself regardless of the code’s efficiency. This in turn improves the performance of the processor.

>Ultimately it remains to be seen just what the impact of this move will be. Hardware scheduling makes all the sense in the world for complex compute applications, which is a big reason why Fermi had hardware scheduling in the first place, and for that matter why AMD moved to hardware scheduling with GCN. At the same time however when it comes to graphics workloads even complex shader programs are simple relative to complex compute applications, so it’s not at all clear that this will have a significant impact on graphics performance, and indeed if it did have a significant impact on graphics performance we can’t imagine NVIDIA would go this way.

>What is clear at this time though is that NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute, which says a lot right there. Given their call for efficiency and how some of Fermi’s compute capabilities were already stripped for GF114, this does read like an attempt to further strip compute capabilities from their consumer GPUs in order to boost efficiency. Amusingly, whereas AMD seems to have moved closer to Fermi with GCN by adding compute performance, NVIDIA seems to have moved closer to Cayman with Kepler by taking it away.

important part here:
>NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute
>downplaying compute

it's also why in nvidia's "dx12 dos and don'ts" they state not to run too many compute + graphics operations at the same time.
>developer.nvidia.com/dx12-dos-and-donts

>muh shill paste
AMD has overhead even in vulkan.

their hardware cannot handle it, while amd's gcn not only can, but shines brighter when it's under heavy async load.

here's some more interesting reads on nvidia's async debacle:
>overclock.net/t/1606224/various-futuremarks-time-spy-directx-12-benchmark-compromised-less-compute-parallelism-than-doom-aots-also

yes, it's mostly focused on the time spy issue regarding their usage of async, but it does delve into nvidia's architectural limitations.

also, the use of the hardware scheduler is why amd gpus used more power and ran hotter than nvidia's from the kepler and gcn 1 days onward. if nvidia slapped a hardware scheduler on pascal, their gpus wouldn't just use as much power as amd's, but most likely more, since nvidia is on 16nm instead of 14nm like amd.

>In the previous pages, we compared the performance of Rise of the Tomb Raider's original Direct 12 patch and the performance of the game's newest DirectX 12 implementations, seeing a higher minimum framerate performance in the majority of cases and improved performance in all cases for AMD's R9 Fury X GPU.

>Now we will compare the DirectX 12 and DirectX 11 versions of the game with this new patch, as while the DirectX 12 version has improved we need to know if this new version actually provides users better performance than what we can achieve with the older DirectX 11 API.

>With AMD's R9 Fury X we see a performance improvement when using DirectX 12 in all cases, whereas Nvidia's GTX 980Ti actually sees a performance decrease in all cases except 1080p performance, where we expect that the CPU performance benefits of DirectX 12 may have had more of a benefit than any potential gains in GPU performance.

>All in all it seems that those with AMD GCN 1.1 or newer GPUs will be better off playing Rise of the Tomb Raider in DirectX 12 whereas Nvidia users are better off using DirectX 11.

>overclock3d.net/reviews/gpu_displays/rise_of_the_tomb_raider_directx_12_performance_update/5

what's important to note is that rise of the tomb raider is an nvidia sponsored, nvidia gameworks title. so yes, the 980 ti did come out ahead at 1080p, and you can argue hurr dx12 doesn't matter, but the point is that nvidia didn't benefit from dx12 at all, and at higher resolutions suffered regressions.

The 480 was O V E R H Y P E D

period

>pcper.com/reviews/Graphics-Cards/3DMark-Time-Spy-Looking-DX12-Asynchronous-Compute-Performance

when we take a look at time spy we can see some pretty interesting results.

when we look at the total % increase with async on & off, one thing is made clear: amd wins hands down. even the humble $200 480 nets a higher increase in performance with async on than the 1080. maxwell flat out did not receive a boost at all.

there's a reason for that. according to pcper:
>Now, let’s talk about the bad news: Maxwell. Performance on 3DMark Time Spy with the GTX 980 and GTX 970 are basically unchanged with asynchronous compute enabled or disabled, telling us that the technology isn’t being integrated. In my discussion with NVIDIA about this topic, I was told that async compute support isn’t enabled at the driver level for Maxwell hardware, and that it would require both the driver and the game engine to be coded for that capability specifically.

which shouldn't come as a surprise: maxwell can't truly do async at all. it's terrible at switching back and forth between compute and graphics, as noted above. pascal does bring some improvements in this regard, but there's more to the story.

the problem with time spy is that it doesn't fully take advantage of async. they designed the async in that benchmark around the way nvidia stated in their "dx12 dos and don'ts":
>Try to aim at a reasonable number of command lists in the range of 15-30 or below. Try to bundle those CLs into 5-10 ExecuteCommandLists() calls per frame.

as noted above with the overclock.net link, time spy doesn't fully utilize async. it doesn't use a lot of it, and it also doesn't use a lot of parallelism, meaning it's not throwing out a lot of compute and graphics operations at the same time. it feeds mostly compute, sending a few compute operations at once, then switches to a little graphics, then back to compute. it does this in a way that doesn't oversaturate pascal's dynamic preemption.
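for reference, here's roughly what that submission guidance looks like in code. this is only a pattern sketch: the function name `submitFrame` and the batch size are hypothetical, the queue and command lists are assumed to have been created and recorded elsewhere, and ExecuteCommandLists() is the only real d3d12 api call in it:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>
#include <d3d12.h>

// Pattern sketch of the batching guidance quoted above: instead of one
// ExecuteCommandLists() call per command list, bundle many lists into a
// handful of submissions per frame (e.g. ~25 lists -> 5 calls). `queue` and
// `lists` are assumed to be initialized elsewhere; this is not a complete
// d3d12 program.
void submitFrame(ID3D12CommandQueue* queue,
                 std::vector<ID3D12CommandList*>& lists)
{
    const std::size_t kBatch = 5;
    for (std::size_t i = 0; i < lists.size(); i += kBatch) {
        const std::size_t n = std::min(kBatch, lists.size() - i);
        queue->ExecuteCommandLists(static_cast<UINT>(n), &lists[i]);
    }
}
```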

when we look at pic related with doom - vulkan, you might notice something: the 1060 wins with weaker processors, but loses to the 480 with stronger processors.

with the way gcn works, it scales more with a stronger processor than a weaker one.

in doom - vulkan, async is enabled on amd cards, and async is used HEAVILY. the older cpus cannot feed the aces and cus fast enough. you still get a boost, but not as big. slap in a 6700k and, well, in doom it turns that $200 card into a $400 one. the 6700k is able to keep up with the 480 and feed it plenty.

nvidia on the other hand doesn't have async enabled. id disabled it since it gives nvidia cards a regression, and they're waiting for nvidia to release a driver before reenabling async on nvidia cards, so the only benefit nvidia gets is the generally lower driver overhead. that's why nvidia gets a bigger boost with older cpus and not newer, stronger ones: the older ones cannot keep up with the driver overhead, so switching over to vulkan frees up a lot of resources, while the 6700k is strong enough that it doesn't matter, so nvidia sees less of a boost.

that's why you'll notice the stronger the processor becomes, the smaller the boost the 1060 receives, and the bigger the boost the 480 starts to receive.

gcn is built to be fed and to utilize async. the more you feed it, the more powerful it becomes. give it a ton of things to do and it shines. vulkan / dx12 will always give amd a boost, but the stronger the cpu, the bigger the boost you'll get.
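a toy way to picture that feeding relationship. everything here is arbitrary (the function `unitsPerFrame`, the capacities, the rates); it illustrates the bottleneck, not real hardware:

```cpp
#include <algorithm>
#include <initializer_list>
#include <iostream>

// Toy throughput model of "gcn is built to be fed": the gpu can only retire
// as much async work per frame as the cpu manages to submit. Every number is
// arbitrary -- this illustrates the bottleneck, not real hardware.
int unitsPerFrame(int cpuSubmitRate, int gpuCapacity) {
    return std::min(cpuSubmitRate, gpuCapacity);
}

int main() {
    const int gpuCapacity = 40;  // hypothetical async work units per frame
    for (int cpu : {15, 30, 60}) // weak, mid, strong cpu submission rates
        std::cout << "cpu submits " << cpu << "/frame -> gpu executes "
                  << unitsPerFrame(cpu, gpuCapacity) << "/frame\n";
}
```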

if you're building a pc now, a simple i3 6100 is more than enough for a 480. if you're on a first generation i7, it'd be best to upgrade, regardless of whether it's amd or nvidia. if you're on a 2600k sandy, ivy, or even haswell, you're fine and don't need to upgrade. you will see a stronger boost with the 480 than the 1060 in this title.

what i love about this one is that it shows these older processors bottlenecking the $200 480.

also of note: one of the reasons nvidia still had a lead here was the light usage of async and the high levels of tessellation.

dx12 and newer renditions of gcn have improved tessellation for amd, but nvidia still maintains a lead here.

maxwell simply cannot get into async at all. its preemption is terrible.

it's why nvidia has kept their mouth shut about maxwell and async.

pascal does improve things with dynamic load balancing and preemption, but it's a bandaid.

>buy high end i7 with our poo in loo 480
kys

also, a way to help people understand the difference between amd's gcn architecture and nvidia's kepler-and-newer architectures like pascal is to think of it as a dual core (amd gcn) vs a single core (nvidia kepler + maxwell), with pascal being a single core + hyperthreading.

it's like owning a dual core but running single threaded games on it: one core sits there unused, and the dual core goes under-utilized. this was the case with dx11 titles.

with dx12 & vulkan, both cores finally have the opportunity to be used if developers utilize it. tapping that second core unleashes a ton of extra performance. nvidia is stuck on a single core design. pascal has hyper-threading, but it's nowhere close to the performance of a true dual core.

it might not be the best technical way to describe it, but it gives you a rough idea.
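and here's that dual core analogy as a runnable toy, where sleeps stand in for graphics and compute work. again, an analogy only, not a gpu simulation:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

// The dual core analogy as code: a serialized design runs graphics then
// compute back to back, while a gcn-style design overlaps them. Sleeps
// stand in for work; this is an analogy only, not a gpu simulation.
using namespace std::chrono_literals;

void graphicsWork() { std::this_thread::sleep_for(100ms); }
void computeWork()  { std::this_thread::sleep_for(100ms); }

int main() {
    auto t0 = std::chrono::steady_clock::now();
    graphicsWork();
    computeWork(); // one after the other: ~200ms
    auto serial = std::chrono::steady_clock::now() - t0;

    t0 = std::chrono::steady_clock::now();
    std::thread gfx(graphicsWork);
    computeWork(); // overlapped with graphics: ~100ms
    gfx.join();
    auto overlapped = std::chrono::steady_clock::now() - t0;

    using std::chrono::duration_cast;
    using std::chrono::milliseconds;
    std::cout << "serialized: " << duration_cast<milliseconds>(serial).count()
              << "ms, overlapped: " << duration_cast<milliseconds>(overlapped).count()
              << "ms\n";
}
```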

pascal cannot handle too many queues of compute + graphics. it also can't handle that much async of either compute or graphics on its own. going back to time spy again: if futuremark put in more async, pascal would slow to a crawl.

gcn does have a ceiling, but it's extremely high compared to pascal's.

ignored
>if you're building a pc now a simple 6100 is more than enough for a 480. if you're on a first generation i7, it be best to upgrade. regardless if its amd or nvidia. if you're on 2600k sandy, ivy, or even haswell, you'll fine and don't need to upgrade. you will see a stronger boost with the 480 than the 1060 in this title.

didn't know an i3 was a high end i7, let alone an i5 4670k.

or I can run my old ass cpu with a 1060 and get better performance
kys

This is probably the biggest lowlife I've ever witnessed on Sup Forums. You've been spamming the same shit for days now, and each time it gets longer and longer. You're saying shit people already know about.

now i want to talk about ashes dx12 with the 480 vs the 1060.

it's sorta the elephant in the room.

the 480 and the 1060 are neck and neck. the 480 appears to come out in the lead, but barely, by 1-4fps. why is this?

well, one thing to note is that ashes uses async HEAVILY, but more on the compute side than compute and graphics. due to this, part of gcn goes under-utilized. the 1060 does receive a boost, all pascal cards do in ashes, but not much. about 5ish percent overall.

gcn still handles compute-only async extremely well. the fury nano, for example, nets a 12% boost with async in ashes.

so what's going on with the 480?

well, the 480 only has 40 compute engines total (36 cus and 4 aces) while a card like the fury nano has 64 total. so with the benchmark being primarily heavy on the compute side, the 480 will get a boost, but not as great a one as if it could bring graphics into the mix as well.

the 1060 gets a minor boost because it doesn't have to switch between graphics and compute much.

but, there could be something else wrong.

as i was reading extremetech's article on doom and vulkan, i came across this bit:

>extremetech.com/gaming/231527-new-doom-update-adds-vulkan-support-amd-claims-substantial-performance-boost
>The RX 480 is just one GPU, and we’ve already discussed how different cards can see very different levels of performance improvement depending on the game in question — the R9 Nano picks up 12% additional performance from enabling versus disabling async compute in Ashes of the Singularity, whereas the RX 480 only sees a 3% performance uplift from the same feature.
>the R9 Nano picks up 12% additional performance from enabling versus disabling async compute in Ashes of the Singularity, whereas the RX 480 only sees a 3% performance uplift from the same feature.
>R9 Nano picks up 12%
>RX 480 only sees a 3%

something else is going on, because even with fewer compute engines in total, the 480 should be receiving a higher boost. 3% is incredibly low for gcn.

Congratulations, you've spotted the paid shill.

It's noon in india. Pajeet is at work.

yup, i'm a paid shill.

look at the card amd gave me.

That just proves you're a paid one and not stupid enough to use AMD hardware. Much like how AMD uses Intel CPUs for their internal tests.

>hurr you still bought a 1080 gg
why shill for amd but own a 1080, you say?

i'm a dissatisfied customer. not only am i enduring this issue, which ALL pascal cards face (yes, even the 1060):

>forums.geforce.com/default/topic/951723/geforce-drivers/announcing-geforce-hotfix-driver-368-95/
>forums.geforce.com/default/topic/941579/geforce-1000-series/gtx-1080-high-dpc-latency-and-stuttering/
>overclock.net/t/1605618/nv-pascal-latency-issues-hotfix-driver-now-available

but also THIS issue, which has been going on since the launch of pascal, two months now, and NVIDIA STILL CAN'T FIX IT.
>forums.geforce.com/default/topic/939358/geforce-1000-series/gtx-1080-flickering-issue/

i can't run too much stuff in the background or i start getting "lag" or audio distortions. i have to set my monitor to 60hz then back to 144hz on every boot, keep my gpu running in high performance mode to help stop the flickering, and turn off a ton of 3d accelerated stuff like fancy desktop effects and 3d acceleration in my browser.

what's more sad is the hotfix DIDN'T FIX ANYTHING for the latency issue.

after reading around ocn and looking into the async stuff i'm just over nvidia.

which is also why i'm selling my 1080 and picking up a 480 until vega comes out.

nvidia quality has gone down the shitter and they had no vision. they downplayed compute. they didn't take the time to think about the future and just focused singularly on dx11 instead of being innovative and trying something new.

oh and their "muh efficiency" is total horse (poo) because all they did to get that WAS TO REMOVE THE HARDWARE SCHEDULER because they wanted dx11 to stay the api of choice for the next ten years.

if the 1060 had a hardware scheduler, it would probably have a 5 watt higher tdp (155w) than the 480 (150w), since it's on 16nm instead of 14nm.

>i'm selling my 1080 and picking up a 480 until vega comes out
There's a limit to bullshit you know.

it's not bullshit

i would keep the 1080 till vega comes out, but could you deal with FLICKERING EVEN ON THE DESKTOP every single day? i can't even stream netflix to my tv and play a game without enduring audio crackling nonsense.

>hurr i have pascal and it doesn't happen on my system
that's great, but you're still affected by it. run latencymon; nvidia confirmed it themselves, everyone is affected. it just takes a certain amount of stuff running to push latency high enough. on some systems, like mine, netflix and a game is enough; others need netflix, a game, twitch, and streaming something.

>tfw amdrones are trying to copy me

Damn I didn't realise I hit a nerve.

Nvidia waited for the 480 then blew it the fuck out by pricing the 1060 to match. Gotta say thanks Raj

1060 is a full performance tier above the 480 once you OC it. it's closing in on 980ti/1070 performance. shame that pascal doesn't OC even higher, maybe volta will improve things on that front.

maxwell was the same way. cards like the 970 and 980 could punch well above their weight compared to their stock speeds.

M8, they're all assmad that I keep reminding them that amd is shit and that as an amd owner I'd switch to nvidia any day. They're trying to fight back with falseflagging but it's failing spectacularly.

you hit a nerve because this is exactly how nvidia gets away with this nonsense. people downplay their issues and look the other way.

>pic related
i upgraded to the 1080 from dual 980's

this entire year nvidia has been plagued by awful driver releases. i've used nvidia since 2004, starting with my fx5600. i've had a few amd cards over the years, a 4850 and a 6950, and never had issues with them. but i never had issues with nvidia either, and always kept to nvidia because the entire industry ranted and raved about them.

but since windows 10 their quality has gone down the shitter. from sli regressions to improvements, then random regressions again, to the black screens i kept getting back in january with the january driver.

and now this nonsense with the flickering and latency.

then, most of all, i listened to the people that kept insisting maxwell can do async. time spy now CLEARLY shows maxwell cannot do async. it took them nearly a month to respond about the flickering, and a similar amount of time about the latency. from nvidia: lies, half truths, 970 3.5gb gate, and honestly, worst of all, silence about the issues. absolute silence. it's like pulling teeth to get them to respond.

they say amd has driver issues, but with nvidia as of late, amd can't be any worse.

but at least amd has been more open over the past few years and their cards can actually do async.

and they have been more innovative than nvidia. if it wasn't for them, we wouldn't even have dx12 and vulkan.

>i7-6700
>absolute top of the line i7
Wew lad, it's like broadwell-e is nonexistent. lemme go trade my i7-6950 for a 6700

>pic related

At least try and make it convincing like that Romanian guy, or wherever he was from.

>having access to high end hardware =/= owning it

single threaded performance m8

it still matters.

plus, with dx12 scaling, more than 4 cores doesn't necessarily add much improvement.

>anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/4

there are a few titles where 6 cores gain a boost, but then you run into another wall where 8+ cores don't continue to increase performance.

10% at 60fps is a typical discrepancy of 6fps, which is relatively negligible.
If it were at 144fps it would make more sense to consider it a big deal, but neither card can hold a 144Hz 1080p ultra minimum, and neither can manage that at 1440p ultra as a minimum either.
Nothing wrong with the cards, they're just not that big of a deal. Waiting for vega to compare price wise against the 1080 or 1080 Ti when it releases; if nvidia's card lacks HBM2 or h/w level asynchronous compute and vega rocks HBM2, I'll likely go AMD.

it's all true m8.

...

You could have taken these pictures out of a guts thread, they don't prove shit.

some more. took these before i sold them.

All AMD GPUs get better performance in DX12 from newer CPUs, due to the CPU being able to feed more data to the GPU for asynchronous compute etc.
Nvidia cards get less of a boost from the CPU due to their lack of h/w level asynchronous compute.

nope, they're all mine

kek, oh so you sold them. Oh well that obviously explains why you don't have them anymore! Say, does your uncle work at Nvidia by any chance?

>taking 50 different pics of your hardware

This is exactly how I know you don't own this shit. No one (right in the head at least) takes a mass of pictures of their 980 and shit. You've probably downloaded these pics from neogaf or some shit.

There's falseflagging, then there's trying way too hard. I'll give you a 2/10 for the effort though.