1060 Vs 480 DX12 Benchmarks With Async Leaked - 1060 Is Faster

wccftech.com/nvidia-gtx-1060-leaked-benchmarks/

videocardz.com/62278/nvidia-geforce-gtx-1060-rumors-part-7-new-cards-more-benchmarks

...

stop spamming indian street forums

...

...

>currytech

Hitman
Rise of the Tomb Raider
Just Cause 3
Black Ops 3
Doom 4

>10% better performance
>20% higher price
>by the time it is out, you'll be able to buy factory overclocked RX480 which matches the 1060 while being more silent and costing the same as REFERENCE 1060

>wccftech

lol

fuck off

>nvidia architecture doesn't gain ANY real performance with async compute on games
>somehow on 3dmark dx12 test AMD and Pascal have basically the same gains
I wonder who is behind this...

>implying you can't oc pascal
>implying the 480 14nm isn't a fail

videocardz.com/62278/nvidia-geforce-gtx-1060-rumors-part-7-new-cards-more-benchmarks

DELETE THIS

It's the same company behind extreme tessellation

Watch out, the AMD defense force is not going to like this.

does it really surprise anybody that the company who spent a fuckload improving dx11 to almost dx12-level performance sees almost no performance improvement going to dx12?

RIP AMD

Quads confirmed for truth

I know wccftech are a laughing stock for leaks, but how are videocardz?

In the past they've been shit, but for most of the Nvidia cards this time they've been pretty much spot on.

except the 480 has terrible OC headroom and tops out at 1350mhz, which is less than 100mhz from the stock core clock.

even a 1550mhz 970 beats a 1350mhz 480 lmao

◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕ OFF THE CHARTS

>>>◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕◕ NVIDIA
>>◕◕◕◕ AMD
>◕◕◕◕

>Using 3Dmark Time Lie when Doom clearly shows it to be false.

Time Spy doesn't actually use asynchronous compute, it uses concurrency, which is different.

Read the 1080 white papers, you idiots.

'I should note that the primary performance benefit as implemented in Time Spy is via concurrency, so everything here is dependent on a game having additional work to submit and a GPU having execution bubbles to fill.'

anandtech.com/show/10486/futuremark-releases-3dmark-time-spy-directx12-benchmark

Time Spy does not use parallelism for async compute, thus it does not support AMD's async compute proper. It is heavily biased towards Nvidia's preemption, etc.

Basically you can't use Time Spy to compare AMD to Nvidia on a level playing field. You would need it to change the algorithm for each, so that Nvidia gets the best score possible using its architecture, and change it to parallel to get the best from AMD. Just using one type of system for both skews the results.
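
For context, here's a minimal sketch of what "async compute" looks like at the D3D12 API level (this is not Time Spy's actual code, and the function name is purely illustrative). The application only records compute work on a second, compute-type command queue; whether that work runs concurrently, in parallel, or gets serialized is decided by the driver and the GPU scheduler, which is exactly where AMD and Nvidia differ.

// Hypothetical sketch: a dedicated compute queue alongside the graphics queue.
// "Async compute" at the API level is nothing more than submitting work to
// both queues and letting the hardware overlap them (or not).
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitBothQueues(ID3D12Device* device,
                      ID3D12CommandList* graphicsWork,
                      ID3D12CommandList* computeWork)
{
    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;

    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));

    // Work on the two queues *may* overlap on the GPU; nothing in the API
    // forces concurrent or parallel execution.
    ID3D12CommandList* gfxLists[]  = { graphicsWork };
    ID3D12CommandList* compLists[] = { computeWork };
    gfxQueue->ExecuteCommandLists(1, gfxLists);
    computeQueue->ExecuteCommandLists(1, compLists);
}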

PLS

how much will 1060 cost?

>the 480 has terrible OC headroom and tops out at 1350mhz

Funny that the PowerColor Devil card is shipping at 1367MHz out of the box then. Boy are they going to get a lot of returns when people realise that the card doesn't work at all!

>DX12 Benchmarks
Irrelevant.

AMD is not crying, they are laughing, because Vulkan and DX12 in games clearly show that 3dmark is not telling us the truth. Otherwise we would not be seeing such significant gains in Doom, RotTR, Hitman, AotS, etc.

It is annoying that 3dmark gimped Time Spy though. I shall be posting on their forums asking why this is the case once the site is back up (conveniently it's down right now. It's almost like they are deliberately avoiding the backlash).

>Time Spy does not use parallelism for async compute, thus it does not support AMD's async compute proper.

you clearly don't know the difference between parallelism and concurrency

stackoverflow.com/questions/1050222/concurrency-vs-parallelism-what-is-the-difference

AMD has true concurrency (CUs can be utilized for general compute work while graphics is bottlenecked by ROPs/TMUs/other fixed-function hardware), NVIDIA only preempts between the two, which leaves parts of the GPU idle.
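
If the concurrency/parallelism distinction is still confusing, here's a rough CPU-side analogy (plain C++, purely illustrative, not GPU code): preemption/time-slicing overlaps two workloads in time on one worker, while parallelism runs them at the same instant on separate hardware.

// Concurrency via time-slicing vs. actual parallelism, as a toy example.
#include <thread>
#include <cstdio>

// Stand-ins for GPU workloads; the names are purely illustrative.
void graphics_slice(int s) { std::printf("graphics slice %d\n", s); }
void compute_slice(int s)  { std::printf("compute slice %d\n", s); }

int main() {
    // Concurrency by preemption: one worker alternates between slices of the
    // two workloads. They overlap in time but never execute simultaneously,
    // so total time is roughly the sum of both.
    for (int s = 0; s < 4; ++s) {
        graphics_slice(s);
        compute_slice(s);
    }

    // Parallelism: both workloads execute at the same instant on separate
    // hardware threads - analogous to compute filling otherwise-idle shader
    // units while graphics is stuck waiting on fixed-function hardware.
    std::thread g([] { for (int s = 0; s < 4; ++s) graphics_slice(s); });
    std::thread c([] { for (int s = 0; s < 4; ++s) compute_slice(s); });
    g.join();
    c.join();
}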

$250 MSRP for cards by 3rd parties
$300 for FE.

So almost tree fiddy for the AIB cards after nvidia tax.

>is not telling us the truth

those don't look real to me. dx 12 rise of the tomb raider, for example

hardwarecanucks.com/forum/hardware-canucks-reviews/72889-radeon-rx480-8gb-performance-review-19.html

I don't get it

your image says 41.4 fps for the RX 480 in rise of the tomb raider at 1080p but that's not what other benchmarks show. just one example.

Because ALL reviewers have the same setup or preset?...

You can, but it's a waste of time.

If 3Dmark scores are telling the truth then why does Doom show such huge gains for AMD? Why does RotTR show big gains for AMD? Why does Hitman show such big gains for AMD? Why does AotS show such big gains for AMD? Why does Warhammer show such big gains for AMD? Also.

im just saying that i've never seen a benchmark with scores as low as the one nvidia posted. i don't know how they got numbers that low for the 480

guru3d.com/articles-pages/amd-radeon-r9-rx-480-8gb-review,10.html

>Galax
>Nvidia
Need I say more.

>mahigan

If Sup Forums was half as educated as that guy is about gpu's this board would be a much better place.

THE GOYIM KNOW
SHUT IT DOWN

The other pic is newer than the 480 review - tomb raider's patch 7 came out a few days later and a few sites (like hardware canucks and oc3d) retested. Guru3d did not.

>implying you can't oc pascal

But that's the thing, you can't.

>except the 480 has terrible OC headroom and tops out at 1350mhz, which is less than 100mhz from the stock core clock.

We've been over this. The card comes with insufficient power distribution and the cheapest possible cooler.
AIB models get a much beefier cooler and 8pin or dual 6-pin power, so they'll be able to squeeze out way better overclocks. Even the biggest anti-AMD shill site on the planet has confirmed that they are easily getting 1350-1500mhz on the cards depending on silicon quality.
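
For what it's worth, the power headroom argument is just arithmetic on the nominal PCIe limits (75 W from the slot, 75 W per 6-pin, 150 W per 8-pin); a quick sketch with those figures assumed rather than measured:

// Rough board power budgets behind the "8-pin or dual 6-pin" point.
#include <cstdio>

int main() {
    const int slot = 75, six_pin = 75, eight_pin = 150;  // nominal watts

    int reference_480 = slot + six_pin;            // 150 W ceiling, reference card
    int aib_single_8  = slot + eight_pin;          // 225 W with one 8-pin
    int aib_dual_6    = slot + six_pin + six_pin;  // 225 W with dual 6-pin

    std::printf("reference: %d W, 8-pin AIB: %d W, dual 6-pin AIB: %d W\n",
                reference_480, aib_single_8, aib_dual_6);
}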

>But that's the thing, you can't.
can you elaborate?

the 14nm is a fail

>nvidiamark

>midrange $200 card
>performing close to flagship Fury and 980Ti

yeah ok.

see

>mfw dx12

SMAA vs SSAA
Well I don't know, user

so 1060 or 480?

I'm talking about the 16nm vs 14nm efficiency

>quantum break

That is such a troll image given 1) the software was capturing frametimes wrong and 2) Nvidia suffered the same behaviour.

1060 obviously

>>implying you can't oc pascal
>But that's the thing, you can't.

wat

Performance gains across the board
What's the issue here?

>Material made by Nvidia's own marketing department with suspicious-as-fuck settings (using a type of AA that doesn't allow async compute among others) and outdated drivers for AMD
>"The truth"
We spent last week reminding you n/v/idiots where this came from and how suspicious the settings used were and you're still reposting in hopes people will buy it?

There was plenty of shilling for the 480 before it came out, but this is almost on a whole different level, posting the same thoroughly debunked figures over and over again...

The fact that the gains are marginal for the amount of overclocking required.

>I'm talking about the 16nm vs 14nm efficiency

You cannot directly compare the efficiency of the fabrication method on two completely different chips. Polaris is a low-risk design, optimized for high yields on a brand new (and therefore risky) node.

>you cant oc 1080

480 obviously

1060 = meme
480 = supreme

>Using SMAA to avoid async compute on AMD blowing it the fuck out.

>2 minute benchmark runs.

>Make a 232 mm^2 chip and crank it up so that it competes with chips twice its size
>Somehow surprised that it's not as efficient as it could have been
Right...

comparing a 14nm card that can actually do everything in hardware
with a 16nm card that's missing most of that hardware in favor of power/perf
and yet it needs to be overclocked almost to the max to even pass a stock 480

Do you have an Nvidia card? It's the BOOST, not the overclocking.

>trusting benchmarks before the card is released

RX 480 was supposed to not be that good, but in the end, with the drivers and the non-reference GPUs coming, we're probably going to have a very good GPU for the price. You can't compare them at this point; wait a month or two so you'll have something more reliable.

If you want to gimp 3Dmark further towards Nvidia just leave your card running at 100% fan speed for 10 minutes then run 3Dmark with the demo before the benchmark disabled.

You do understand that 1) those are boost clocks and 2) boost is inherently a form of automatic overclocking until the card hits either its BIOS-defined temp limit or its power draw limit.

Plus overclocking Nvidia cards is essentially getting the card to always run at a given frequency, rendering the point of boost irrelevant.

but can i buy a 1060 for $350 CAD on tuesday?

>Plus overclocking Nvidia cards is essentially getting the card to always run at a given frequency, rendering the point of boost irrelevant.

wut? no it doesn't. overclocking an nvidia gpu through afterburner would just tack the extra clock speed onto whatever boost clock your card is running at under load.
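
As a toy model of what that means in numbers (all figures made up for illustration; the real limits come from the vBIOS power/temperature targets):

// An Afterburner-style core offset rides on top of whatever the card boosts
// to under load, and the result is still clamped by the power/temp limiter.
#include <algorithm>
#include <cstdio>

int main() {
    int boost_under_load = 1860;  // MHz the card happens to boost to in a game
    int core_offset      = 150;   // MHz added in the overclocking tool
    int limiter_cap      = 1987;  // MHz ceiling imposed by power/temp limits

    int effective = std::min(boost_under_load + core_offset, limiter_cap);
    std::printf("effective clock: %d MHz\n", effective);
}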

>Plus overclocking Nvidia cards is essentially getting the card to always run at a given frequency, rendering the point of boost irrelevant.

>has never owned a nvidia card
>shills against it for free
Overtime

*everytime

DELETE THIS

my wallet is dead from buying nvidia shit.

>looking at those bullshit charts
>480 so close to a fury x
lol, fucking horseshit

Saved.
Fuck, he's good.

None of these benchmarks compare 1060 vs 480 on Vulkan Doom with async compute turned on.
Did Nvidia tell these guys they'll be sued if they actually benchmark the cards, or something?

You should see the shitstorm he caused over on OCN when ashes of the singularity made news for its heavy usage of async compute. In a nutshell, the Nvidia crowd told mahigan that he is wrong and a fucking shill and that other DX12 titles that aren't AMD propaganda will show Nvidia's dominance.

As we now know that didn't happen and mahigan called it right last year. Drama follows his posts everywhere primarily because he does do his research and is generally correct in his claims.

This guy got a YouTube? Podcast? Blog? Anything? AMD fanboying aside, he seems really informative. I'd listen to his podcast daily.

steamcommunity.com/app/223850/discussions/0/366298942110944664/

From the horse's mouth - async compute is disabled for Maxwell in Nvidia's driver.

Not to my knowledge - he is just some random dude who posts on OCN and anandtech (he might have an account over at hardocp but lol fuck that place).

They just don't send the review samples anymore if reviewers show them in a bad light, just like most companies.

Lmao that's a new one AMD subhumans complain about currynigger websites after they've been using them as "credible source" to shill for 480. Ban those Pajeets asap.

Remember when Nvidia blacklisted Tweaktown? Fun times.

>I know what I'm talking about cause I'm a amdfag

the 1060 isn't released....

>You are so full of FM_Jarnis, this is being throughly researched by the best Overclockers in the world.

Holy shit that's a lot of salt.

People expect too much from the benchmark. It does employ async compute, just not in the manner many had hoped after AMD's marketing of the term. AMD has stretched the definition a bit far because they have so much hardware capable of handling it.

I think people are too obsessed with their religion.. I mean brand winning. They'll go to any lengths to make it seem like their product is the best even when it's not.

That benchmark hardly even makes AMD products look bad. The RX 480 was supposed to be faster than GTX 980... and it is. Or maybe it was 970, but I digress. That seems like a pretty good result to me.

It doesn't help that timespy is hot on the heels of doom's vulkan patch, which uses quite a lot of GCN-favouring features, which is why you see those enormous jumps (especially the fury x).

With the trend of DX12 titles (and the much misunderstood, but hyped, async compute) generally running faster on GCN, people were hoping to see more gains from timespy, given that historically 3dmark's benchmarks have been about squeezing as much from a gpu as possible.

It is fairly odd in a roundabout way to have a synthetic benchmark show lesser gains than in real vidya, but here we are.

$420 in europe
so round that up to 420 euros

It's not that odd, synthetic benchmarks hardly ever give you performance that represents games. That's why everyone calls them synthetic.

I don't even look at synthetic benchmark scores for anything other than to compare my card to other cards of the same type to see if it's under-performing or doing better.

I've known his name since that DX12 thing started, wise guy.

>With Async