Why does the GTX 1060 have the same performance as the Rx 480 and even beat it in some games outside of DX12?

On paper it would appear that the 8GB Rx 480 would shit all over the 6GB GTX 1060 every time.
>~35% faster texture fillrate
>33% more vRAM
>33% faster vRAM bandwidth
>33% more vRAM bus width
>~35% more FP32 performance
>~270% more FP64 performance
>~860% more FP16 performance (probably wrong)

The only thing the GTX 1060 does better is pixel fillrate (~200% better). What gives? Is this Nvidia just bribing game companies like always, or have they made a leap in GPU technology nobody talks about?

>Source:
en.wikipedia.org/wiki/AMD_Radeon_400_series#Chipset_table
en.wikipedia.org/wiki/GeForce_10_series#GeForce_10_.2810xx.29_series
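
For what it's worth, OP's ratios can be roughly reproduced from the unit counts and boost clocks in the Wikipedia tables linked above (the clocks are approximate boost figures, so treat the exact percentages as ballpark). Quick Python sanity check:

def rates(alus, tmus, rops, clock_ghz, bw_gbs):
    # per-clock throughput * clock; FP32 assumes 2 FLOPs (FMA) per ALU per clock
    return {
        "fp32 GFLOPS": 2 * alus * clock_ghz,
        "texel GT/s": tmus * clock_ghz,
        "pixel GP/s": rops * clock_ghz,
        "bandwidth GB/s": bw_gbs,
    }

rx480 = rates(2304, 144, 32, 1.266, 256)    # RX 480 8GB per the Radeon 400 table
gtx1060 = rates(1280, 80, 48, 1.708, 192)   # GTX 1060 6GB per the GeForce 10 table

for key in rx480:
    print(f"{key}: RX 480 {rx480[key]:.0f} vs 1060 {gtx1060[key]:.0f} "
          f"(ratio {rx480[key] / gtx1060[key]:.2f})")
# FP32, texel rate and bandwidth all come out ~1.33x in the 480's favour,
# but pixel fillrate comes out ~0.49x, i.e. the 1060 has roughly double.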

DELETE THIS IMMEDIATELY!!!

>can't read

Whatever the reason the rx 480 became insanely popular when it hit retail, mostly because of filthy miners.

Because people aren't kidding when they say amd doesn't know how to make drivers.

The bump amd gets with dx 12 is literally what happens when you take the need for optimization out of their hands and give it to something actually competent instead.

>Not using asci miners

In a perfect world where drivers were always perfect could graphics cards simply be ranked by FP32 performance?

Nvidia superior architecture & efficiency always wins again AYYMD MOAR CORES

Give it a few more months. AMD will optimize the shit out of the drivers. Next thing you know the RX480 is within 5-7% of a 1070's performance.

The Rx 480 is pretty 1337 in efficiency for mining which is why they're so popular.

Four Rx 480 cards > asci miners with similar hashrate (~100 MH/s).

Though in the end mining is now a huge meme unless you have a farm of at least 1000 Rx 480 GPUs.

But isn't Asci just cheaper m8

They also use more electricity which is why they're so unpopular for bitcoin mining.

Efficiency > raw power in bitcoin mining.
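
Toy example of why efficiency, not raw hashrate, is what decides profitability (all dollar figures below are made up; the ~25 MH/s at ~150 W per RX 480 is just the usual Ethash ballpark):

def daily_profit(hashrate_mhs, usd_per_mhs_day, watts, usd_per_kwh):
    revenue = hashrate_mhs * usd_per_mhs_day
    power_cost = watts / 1000 * 24 * usd_per_kwh
    return revenue - power_cost

# four RX 480s (~100 MH/s, ~600 W) vs a higher-hashrate but power-hungry rig,
# at a hypothetical $0.03/MH/day and $0.15/kWh
print(daily_profit(100, 0.03, 600, 0.15))    # ~ +0.84 USD/day
print(daily_profit(150, 0.03, 1400, 0.15))   # ~ -0.54 USD/day despite 50% more hashrate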

This.

Just look at the 290X now.

Because Nvidia uses a bunch of proprietary extensions that are faster than the default ones.

Instead of random_opengl_function you use NV_random_opengl_function and it's magically faster.
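
If anyone's curious, this is roughly how an app would detect and branch on a vendor extension at runtime. Sketch only: it assumes PyOpenGL and an already-created GL 3.x context, and GL_NV_shader_buffer_load is just a real NV extension picked as an example, not anything a specific game uses.

from OpenGL.GL import glGetIntegerv, glGetStringi, GL_NUM_EXTENSIONS, GL_EXTENSIONS

def has_extension(name: bytes) -> bool:
    # enumerate the extensions the driver exposes for the current context
    count = glGetIntegerv(GL_NUM_EXTENSIONS)
    return any(glGetStringi(GL_EXTENSIONS, i) == name for i in range(count))

if has_extension(b"GL_NV_shader_buffer_load"):
    pass  # vendor-specific fast path
else:
    pass  # portable core/ARB path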

This isn't possible

Won't happen, Nvidia basically throws its scraps to khronos and keeps all the optimized fast proprietary shit for itself.

Because L2 cache, ROP performance, command processing, scheduling, driver optimization and more aren't in that chart.

>Because Nvidia uses a bunch of proprietary extensions that are faster than the default ones.
>Instead of random_opengl_function you use NV_random_opengl_function and it's magically faster.
Why does the magic go away in DX12 anyway?

Thank you OP. I was leaning towards getting an rx 480 but now it seems the 1060 is the better choice. Looks like I dodged a bullet.

Source?

DX12 is proprietary, better optimized, and fairly new, so there aren't as many NV-specific optimizations.

you guys know vram doesn't magically translate into a faster game unless you're running ginormous texture mods or something right?

He hasn't even provided any proof that it performs better than the rx480

It does, else APUs stuck with 25.6-34.1 GB/s of memory bandwidth wouldn't suck so much.
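
Those APU numbers are just dual-channel DDR3 peak bandwidth, for anyone wondering where they come from:

# dual channel = 2 x 64-bit = 16 bytes per transfer, at the standard DDR3 data rates
for mts in (1600, 2133):
    print(f"DDR3-{mts} dual channel: {16 * mts / 1000:.1f} GB/s")
# DDR3-1600: 25.6 GB/s, DDR3-2133: 34.1 GB/s -- versus 192/256 GB/s on these cards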

Explain how the 780Ti and Titan Black are getting blown out of the water by the 290? Both cards came out performing better than the 290X, but over the years they've been crushed by the 290. Or the 780 getting crushed by the 280X/380X.

The same will happen with the FuryX and the 980Ti

You answered yourself: games are mostly pixel fillrate dependent, and that relies on ROPs. Nvidia's designs get more ROPs than AMD's but trade away pretty much everything else: shader and texel throughput, and bandwidth.

When an engine takes advantage of shader/texel throughput, you get an 'AMD-sponsored' game.

You yelly of my zec stash?

>Nvidiot shilling
DX12 is primarily a CPU-side improvement and a toolset for developers to better optimize how GPUs are utilized. It's literally trying to take hardware drivers out of the equation as much as possible.

If Nvidia gets anything out of it, it's that they won't have to waste as much time optimizing their drivers for CPU overhead and can focus elsewhere.

It's called drivers

No hardware can save you from bad drivers.

Sup Forums meme never wrong
It's because amd gpus have shit drivers

it's not that the AMD cards got better
it's that Nvidia cards got gimped

>M-MUH BIGGER NUMBERS
And the 1060 still beats the shit out of the 480 in most games. Literally who cares, it's like comparing memory amount on iOS vs Android

You're the stupid tripfag who believed that AMD joined Khronos in 2013, you shouldn't talk about anything regarding OpenGL or Khronos ever
The Fury X already shits on the 980Ti senpai
Except they don't, most developers don't target proprietary OpenGL extensions since it defeats the whole purpose of an API
Also, proprietary extensions hardly ever overlap with the Khronos ones you fucking retarded tripfag, they pretty much always add stuff, I haven't seen an extension that overlaps with Khronos' stuff since the early 2000s
Vulkan is not proprietary, and the 'magic' goes away there as well

>JUST WAIT

Nvidia gimping meme has been proven wrong for at least a year now.

AMD makes superior cards out of the gate and just takes a while to get drivers up to par.

Performance now: Nvidia
More performance later: AMD

With Nvidia it's WYSIWYG. Your cards just won't get any more powerful over the years. 10% is pretty typical these days for their chips over a 2-year period.

With AMD you'll see 10% in the first couple months, then a dropoff to like 3-4% for a year, then another 15% or something crazy the next year.

>Nvidia gimping meme has been proven wrong for at least a year now.
[citation needed]

Two major reasons:

AMD loves to fill their GPUs with general purpose CU blocks (ALUs and TMUs) and skimp on fixed function blocks (rasterizers, ROPs, and geometry/tessellation).

Nvidia has a massive software budget that they use to encourage devs to tailor towards their hardware (lots of geometry, simpler shaders) and to rewrite shaders with "visually indistinguishable" optimized replacements.

Nvidia is able to do more with less (with the benefit of also requiring less power), but it's also the reason their cards are alleged to age like milk, since they're entirely dependent on shader rewrites.

>Not mining an ASIC resistant coin
>Spelling ASIC wrong

>The Fury X already shits on the 980Ti senpai

that's highly game dependent.
it had a pretty unbalanced resource allocation.

What I've been saying for a while: AMD cheaps out on the ROPs pretty often; even back in the Fermi vs Evergreen days Nvidia still had a higher ROP count.

Or maybe they believe they can't feed more ROPs through the pipeline

RBEs/ROPs are only half the picture.
Fiji would have done much better with 8 SEs with 2 RBEs/8 ROPs each even if it had to drop to 56 CUs/3584 ALUs.
Going to a full 96 ROPs additionally would have been good too though of course.
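
The arithmetic behind those layouts, for reference (GCN: 64 ALUs per CU, 4 ROPs per RBE, one geometry/raster front end per SE; shipping Fiji is 4 SEs of 16 CUs and 4 RBEs each):

def layout(ses, cus_per_se, rbes_per_se):
    return {"SEs": ses,
            "CUs": ses * cus_per_se,
            "ALUs": ses * cus_per_se * 64,
            "ROPs": ses * rbes_per_se * 4}

print(layout(4, 16, 4))  # shipping Fiji:  64 CUs / 4096 ALUs / 64 ROPs, 4 front ends
print(layout(8, 7, 2))   # proposed:       56 CUs / 3584 ALUs / 64 ROPs, 8 front ends
print(layout(8, 7, 3))   # "full 96 ROPs": 56 CUs / 3584 ALUs / 96 ROPs, 8 front ends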

In games, but it would suffer in every other workload, and I can't imagine AMD has the money for 2 different Fiji dies, one with a high ROP count for consumers and one with a smaller count for number crunching.

it means that the rx480 will get better with driver updates whereas the 1060 is already maxing its potential

wrong

Do you reckon the 1060 6gb could handle a 4k tv for gaymen? If not, then I could put it in 1080p mode and still view video on my PC in 4k, no?

It could, at 30FPS with everything set to medium-low

1060 has no problems with 1080p

Brilliant, seems to be the perfect card for me in that and every other aspect then.

I still have a bit of reading to do before I'm 100% sold though, and I have to make sure my build is compatible with it.

Bit more handholding if that's alright:

What's a logical approach to finding out if Part X is compatible with the rest of a build?

Buy the 1060 now or wait for black Friday?

Obviously adding anything will come at the expense of cutting something else when you're already near the reticle limit, but it isn't as if ROPs are taking up the majority of the die already.

A 6*(10 CU + 4 RBE) or 8*(7 CU + 3 RBE) chip would have trounced the Titan X in a lot more games while still having vastly higher fp32 crunching power than anything Nvidia had at the time.
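
Same per-SE arithmetic applied to those two layouts (64 ALUs per CU, 4 ROPs per RBE; the Titan X comparison point is GM200's 3072 ALUs):

for ses, cus, rbes in [(6, 10, 4), (8, 7, 3)]:
    print(f"{ses} SEs: {ses * cus} CUs / {ses * cus * 64} ALUs, {ses * rbes * 4} ROPs")
# 6 SEs: 60 CUs / 3840 ALUs, 96 ROPs
# 8 SEs: 56 CUs / 3584 ALUs, 96 ROPs
# either way more ALUs than GM200's 3072, with more front ends and ROPs than real Fiji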

Interesting

AMD is popular among miners again

fucking epic unironically

>hey guys, the GTX 1060 is a beast compared to the 480, even though the 480 is theoretically better
>The only thing a GTX 1060 does better is
Did you mess something up?

AMD's ROPs are simply more powerful than Nvidia's. They offer more fillrate for less. You are just uninformed.

Dem drivers mang...

sauce?

If you look at the texture fillrate, that is the ROPs.

But that's completely wrong, you retard.
Texture sampling rate in GCN comes from the TMUs embedded in CU blocks.
Pixel fill rate, both color and z/depth, are what the ROPs handle.

GCN's pixel fill rates are nothing special, and both AMD and Nvidia get essentially 100% efficiency in synthetic benchmarks.

Nvidia owner here I can confirm

What does a ROP do? Does it rasterize triangles?

In the GCN pipeline, the per-CU rasterizers take triangles as input, do early depth testing, and "write out" 4x4 pixel blocks, which really just means initializing pixel shader threads.

The color/stencil ROPs are part of the Render Back Ends that do final depth/mask testing, MSAA/SSAA sample management and blending, and sample caching and coalesced write-out to memory.

>the per-CU rasterizers

shit, meant per-SE

and GCN RBE overview for contrast...
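
A toy, purely illustrative version of that two-stage split (per-SE rasterizer doing a coarse early depth test and handing out 4x4 pixel blocks, then an RBE/ROP stage doing the final depth test and write-out). It skips triangle coverage, MSAA and compression entirely:

BLOCK = 4  # rasterizers hand out 4x4 pixel blocks (i.e. launch pixel-shader work)

def rasterize(bbox, tri_depth, depth_buf):
    # per-SE rasterizer: walk the bounding box in 4x4 blocks, keep blocks that
    # pass a conservative early depth test
    x0, y0, x1, y1 = bbox
    for by in range(y0, y1, BLOCK):
        for bx in range(x0, x1, BLOCK):
            pixels = [(x, y) for y in range(by, by + BLOCK)
                             for x in range(bx, bx + BLOCK)]
            if any(tri_depth < depth_buf.get(p, 1.0) for p in pixels):
                yield pixels  # in hardware: pixel-shader wavefronts get scheduled here

def rbe_writeout(pixels, tri_depth, color, depth_buf, color_buf):
    # RBE/ROP stage: per-pixel final depth test, then blend (here: replace) and write
    for p in pixels:
        if tri_depth < depth_buf.get(p, 1.0):
            depth_buf[p] = tri_depth
            color_buf[p] = color

depth_buf, color_buf = {}, {}
for block in rasterize((0, 0, 8, 8), 0.5, depth_buf):
    rbe_writeout(block, 0.5, (255, 0, 0), depth_buf, color_buf)
print(len(color_buf), "pixels written")  # 64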

Because the 480 only has 32 ROPs, which is a heavy bottleneck to its performance. For comparison, the 1060 has 48 and the 290/390(X) has 64. The fact that the 480 hangs with those cards suggests that AMD's architecture is pretty good, and will be capable of some serious power once they stop crippling it at the front end to meet strict yield and price targets.
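
Pixel fillrate from those ROP counts and typical boost clocks (clocks are approximate), just to put numbers on the bottleneck:

cards = {"RX 480": (32, 1.266), "GTX 1060": (48, 1.708), "R9 390X": (64, 1.050)}
for name, (rops, ghz) in cards.items():
    print(f"{name}: {rops} ROPs x {ghz} GHz = {rops * ghz:.1f} GPixel/s")
# RX 480: ~40.5, GTX 1060: ~82.0, R9 390X: ~67.2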

This thread is great for spotting the retards from Sup Forums and their drivers meme, incidentally.

Different game engine styles use different blends of arithmetic, texture sampling, geometry processing, etc., and AMD's balance isn't crippled, it's just different from what Nvidia pushes with its TWIMTBP program.

Maxwell (plus TWIMTBP) was basically designed to bottleneck every other architecture by flooding scenes with tons of needlessly tiny triangles, and it's not really "gimping" to not just try to play catch-up in that regard.

AMD clearly wants to encourage future games to just rely on generic shaders as much as possible and keep geometry and pixel throughput lower. It's clearly motivated by wanting to maximize overlap between gaming and GPGPU segments, but it's not directly about gimping or cost cutting per se.

People are still mining with GPUs on scrypt coins? I thought ASICs came out years ago for them.

not every (((crypto)))currency is based on the exact same SHA-whatever mining algorithm.

>Muh Fury X 4096-bit width
Nvidia supports DX12_1
AMD supports only DX12_0 and 11_1
Obviously AMD is making obsolete hardware

Even Intel supports DX12_1

Only poorfags will defend this

I'd like to know too. Likewise, the fury x should be just short of a 1080 on paper, but actually it's notably worse than a 1070 except in vulkan edge-cases (i.e. not even in the average vulkan case).

Fiji had a lot of problems.

> bottlenecked in geometry and rasterization throughput
> only had 2 MB L2 cache
> could only reach 65% theoretical memory bandwidth compared to 75%-80% for typical GDDR5
> not very good Delta Color Compression

literally all it's good at is arithmetic and texture sampling.
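
Rough illustration of the bandwidth point (theoretical peaks from the spec sheets; the 65% figure is the claim above, and applying ~75-80% to the 1080's GDDR5X is my own assumption):

cards = {
    "Fury X (HBM)":      (512, 0.65),    # 65% achievable, per the claim above
    "GTX 1080 (GDDR5X)": (320, 0.775),   # assumed ~75-80% like GDDR5
    "GTX 1070 (GDDR5)":  (256, 0.775),
}
for name, (peak, frac) in cards.items():
    print(f"{name}: {peak} GB/s peak -> ~{peak * frac:.0f} GB/s achievable")
# Fury X ~333, GTX 1080 ~248, GTX 1070 ~198: the 60-100% on-paper bandwidth lead
# shrinks to roughly 35-70% in practice.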