Sapphire RX 480 Nitro

Oh wow, seems like the Sapphire RX 480 Nitro is pretty shit.
Source: computerbase.de/2016-07/sapphire-radeon-rx-480-nitro-oc-test/

Why didn't they just keep their old design? It seems to me like it was much better. The current one seems like complete garbage to me:
Much higher power consumption
Performance doesn't increase a lot
Barely more silent than reference

Sapphire really let me down this time.

guru3d.com/articles_pages/anno_2205_pc_graphics_performance_benchmark_review,9.html

Shit multithread scaling.

Literally no difference between a cheap Pentium dual core and the most expensive Intel 8-core CPU

it's ok, AMDrones can wait a little longer

www.pcworld.com/article/3098825/components-graphics/sapphire-nitro-rx-480-review-polaris-rethought-and-refined.html?page=10

They say it is pretty good...

Still, where is the release date? It was supposed to be out on 22 July, for hell's sake.

I wanted to buy a new PC in mid July and now what? Waiting yet another few months because somebody screwed up again, since there are rumors that the Nitro won't even be out in August?

Depending on which game you test and which settings you use, you'll see different results.

Yes, I know.

But still, I'm pissed off about the delayed release.

I've just seen that a shitty Arctic Mono can actually cool the RX 480 pretty well.
Source: youtube.com/watch?v=zU7BibxgQSM
Is there an AMD partner that allows changing the cooler without losing the warranty?

>The 4gb version which costs as much as 970 is only 2-3 fps ahead in OC mode and will probably drop 3-5 fps behind when you OC your 970 to 1550 clock speeds.
>8 gb version costs 60 Euro more

AMD can never do it right, another fucking DoA product which only boosted sales of 970 in UK lmao.

Well fuck me sideways, but why the fuck is the 4GB version so much weaker? 4GB should be more than enough at 1080p, which this card is designed for. Is there something more to it than less VRAM, like lower clock speeds etc? This doesn't make any fucking sense.

Sapphire said 8GB is better binned, but also more expensive than 4GB

Talking about shitty gimping practices. I really wanted to go for 4GB one since it's actually cheap and seems like a perfect match for 1080p but fuck that.

4GB is only $220 isn't it? Why not go for that

It's actually the best ~200-300 price/performance card, minding the 4GB limitation (isn't an issue for 1080p).

You can still OC the 4GB close to 1400.

Could you be a friend and delete this?
I just bought one and don't want to look at this.

I guess amdfags can wait for vega.

PCGameshardware says:
>- Doom: only allows the "Nightmare" shadow and virtual-pages setting with a 6-GiB card
>- The Division: from WQHD with Ultra details, tends to selectively drop the highest texture tier
>- CoD Black Ops 3: doesn't allow maximum details on a 4-GiB card (which ones exactly, I'd have to look up)
>- Rise of the Tomb Raider: "Very High" textures stutter heavily with 4 GiB, and with 6 GiB at least after loading and entering new areas
>- Mirror's Edge Catalyst: already drops details on a 4-GiB card with the Ultra preset, and with Hyper even more so

which basically says:
Doom highest settings need 6GB
The Division doesn't show some textures starting from WQHD
BO3 no maximum details on 4GB
RotTR: 4GB cards stutter, 6GB cards stutter after loading a new area
MEC: Ultra: Textures are left out, on "Hyper" even more

If you plan to use your card for 3-5 years and want to play Triple A games, get as much fucking VRAM as possible.

Why are the requirements skyrocketing recently?

Games dev/GPU are trying to push the 8GB mark for higher res textures

I remember 6-7 years ago people said 1GB is all you'll ever need for 1080p. Lol.

>within margin of error of custom 1060
>shit

People were saying that 2GB is fine two years ago. Dual core CPUs too.

The 1060 is also trash though.

Stupid, 4GB is enough for a mid-range GPU; any new AAA game that uses more than that is going to run at sub-30fps anyway, regardless of your VRAM.

Plain 4 core is getting phased out too. Don't buy an i5 anymore.

>buying a pc
I hope you mean making one, friend.

I pre-ordered mine on Amazon.com but lord knows when they'll actually ship. Waiting to get one from Newegg so I can cancel the order and get it faster.

Keep checking
nowinstock.net/computers/videocards/amd/rx480/
and cross your fingers. They added the Nitro this morning on pre-order. But other than that, the only alerts that I've been getting are for the fucking reference card. Who are the retards buying reference? Smh

The thing is that high resolution textures need a lot of VRAM, but next to no performance. The old consoles had next to no VRAM, rather more performance (well, for the time they were released anyway). That's why you didn't need a lot of VRAM in those days; developers just didn't ship high res textures, as they'd be a waste on a large part of the audience.
The new consoles however have a lot of VRAM, but not a lot of raw performance. That's why developers started to use better textures to make the games look better (which was really necessary, PS3 era textures look horrible; an emulated FF12 with increased internal resolution looks barely worse than FF13).
I don't know if that's the only reason, but it seems logical to me and is certainly part of the problem.

the easiest way to futureproof most computers nowadays is to stack a bunch of RAM in them
I don't regret refunding my R9 290 Vapor-X 4GB for an R9 390

>People were saying that 2GB is fine two years ago

I do wonder how fucking salty Kepler owners are now that Tahiti murders the 770, 680 and lower.

Intel won't be bumping up the core count on their consumer chips for a very, very long time, so i5s will remain quad core for the foreseeable future, and realistically that's what gamers buy, as few go balls out for an i7 (or, if they are retarded, an enthusiast i7).

I was wondering when the poorfags would show up

What a fucking surprise

Another
Major
Disappointment

The 1060 is just better than the 480 in every way.

I'm going with 6-core Skylake-X when it comes out. Some games already prefer 4c/8t in DX12.

>6GB card

HF being obsolete in two months

the reason why nvidia had stronger dx11 drivers was because they were multi-threaded which helped lower driver overhead since it could be spread out across multiple threads.

but another reason was because of their use of a software scheduler.

one of the reasons why fermi ran so hot was because it utilized a hardware scheduler, just like all amd gcn based cards do. hardware scheduling draws a lot of power and more power means more heat. why did they use a hardware scheduler? a hardware scheduler will always be faster than a software one. less overhead, and the gpu can do it much faster than software.

the problem with a hardware scheduler? once built, you cannot modify it. you have to build a whole new card if you update the hardware scheduler.

but nvidia, wanting to move on from their housefire fermis, decided to remove hardware based scheduling with kepler and beyond. this is the main reason why kepler used far less power and ran cooler than fermi. nvidia realized that with dx11 you didn't need a complex hardware scheduler. most of the scheduler went underutilized and was overkill. with dx11's multi-threading capabilities, and by making their drivers multi-threaded, they alleviated a lot of the driver overhead one would endure with a software scheduler. in turn this gave them the opportunity to have more control over scheduling, able to fine tune the drivers for individual games. well, they had to. this meant a lot of work for nvidia's driver team, but it helped them max out every ounce of juice they could get from their cards while lowering power and reducing heat.
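to put that "dx11 multi-threading capabilities" bit in concrete terms, here's a rough app-side sketch (C++/D3D11 deferred contexts, which is the API-level hook for recording on multiple threads; device creation is stripped down, error handling is omitted, and the worker count is made up). it only illustrates the API model, not nvidia's driver internals, which are multi-threaded behind the scenes regardless of what the app does:

```cpp
// Sketch only: D3D11 deferred contexts let worker threads record commands,
// while the immediate context replays them on the main thread.
#include <d3d11.h>
#include <thread>
#include <vector>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> immediate;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, nullptr, &immediate);

    const int workerCount = 4;  // arbitrary for the example
    std::vector<ComPtr<ID3D11CommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (int i = 0; i < workerCount; ++i) {
        workers.emplace_back([&, i] {
            ComPtr<ID3D11DeviceContext> deferred;
            device->CreateDeferredContext(0, &deferred);
            // ... each thread would record its slice of the frame here ...
            deferred->FinishCommandList(FALSE, &lists[i]);
        });
    }
    for (auto& t : workers) t.join();

    // Only the immediate context actually submits work to the GPU.
    for (auto& cl : lists) immediate->ExecuteCommandList(cl.Get(), FALSE);
}
```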

maxwell continued this by removing more hardware based scheduling.

the problem? dx12 and vulkan need a hardware scheduler to be taken full advantage of. you need it for the complex computations of async and to manage compute + graphic operations at the same time. they're complex, and you need the performance.

and this is why nvidia cards cannot do async properly. not only do they not have the hardware needed to run compute + graphics at the same exact time, but they lack the complex, high performance hardware scheduler to run them. their hardware can only do compute or graphics, one at a time. with pascal nvidia did some tweaks to help speed up the switching between compute and graphics, but it still isn't optimal. it's a bandaid. pascal still comes to a crawl if it receives too many compute + graphics operations. it cannot switch fast enough.

what's funny is nvidia knew what they were doing. they just didn't think compute was ever going to be useful in graphics and games.

here's a nice article from kepler's launch done by anandtech:
>anandtech.com/show/5699/nvidia-geforce-gtx-680-review/3

>GF114, owing to its heritage as a compute GPU, had a rather complex scheduler. Fermi GPUs not only did basic scheduling in hardware such as register scoreboarding (keeping track of warps waiting on memory accesses and other long latency operations) and choosing the next warp from the pool to execute, but Fermi was also responsible for scheduling instructions within the warps themselves. While hardware scheduling of this nature is not difficult, it is relatively expensive on both a power and area efficiency basis as it requires implementing a complex hardware block to do dependency checking and prevent other types of data hazards. And since GK104 was to have 32 of these complex hardware schedulers, the scheduling system was reevaluated based on area and power efficiency, and eventually stripped down.

>The end result is an interesting one, if only because by conventional standards it’s going in reverse. With GK104 NVIDIA is going back to static scheduling. Traditionally, processors have started with static scheduling and then moved to hardware scheduling as both software and hardware complexity has increased. Hardware instruction scheduling allows the processor to schedule instructions in the most efficient manner in real time as conditions permit, as opposed to strictly following the order of the code itself regardless of the code’s efficiency. This in turn improves the performance of the processor.
>Ultimately it remains to be seen just what the impact of this move will be. Hardware scheduling makes all the sense in the world for complex compute applications, which is a big reason why Fermi had hardware scheduling in the first place, and for that matter why AMD moved to hardware scheduling with GCN. At the same time however when it comes to graphics workloads even complex shader programs are simple relative to complex compute applications, so it’s not at all clear that this will have a significant impact on graphics performance, and indeed if it did have a significant impact on graphics performance we can’t imagine NVIDIA would go this way.
>What is clear at this time though is that NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute, which says a lot right there. Given their call for efficiency and how some of Fermi’s compute capabilities were already stripped for GF114, this does read like an attempt to further strip compute capabilities from their consumer GPUs in order to boost efficiency. Amusingly, whereas AMD seems to have moved closer to Fermi with GCN by adding compute performance, NVIDIA seems to have moved closer to Cayman with Kepler by taking it away.

important part here:
>NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute
>downplaying compute

it's also why in nvidia's "dx12 do's and don'ts" they state not to run too many compute + graphics operations at the same time.
>developer.nvidia.com/dx12-dos-and-donts

their hardware cannot handle it, while amd's gcn not only can, but shines brighter when it's under heavy async load.
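for reference, this is roughly what "compute + graphics at the same time" looks like from the application side in dx12 - a sketch only, with all the boilerplate stripped. whether the two queues actually overlap on the gpu is entirely up to the hardware/driver, which is the whole point of the argument above:

```cpp
// Sketch only: a DX12 app expresses async compute by submitting to separate
// queue types; GCN's ACEs can run them in parallel, other hardware may serialize.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics work goes to a DIRECT queue...
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // ...while async compute work goes to a separate COMPUTE queue.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // A fence marks only the real dependencies, so independent compute work
    // (post-processing, particles, etc.) is free to overlap with rendering.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    computeQueue->Signal(fence.Get(), 1);
    gfxQueue->Wait(fence.Get(), 1);
}
```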

here's some more interesting reads on nvidia's async debacle:
>overclock.net/t/1606224/various-futuremarks-time-spy-directx-12-benchmark-compromised-less-compute-parallelism-than-doom-aots-also

yes, it's mostly focused on the time spy issue regarding their usage of async, but it does delve into nvidia's architecture limitations.

also, the use of the hardware scheduler is why amd gpus used more power and ran hotter than nvidia's since the kepler and gcn 1 days. if nvidia slapped a hardware scheduler on pascal, their gpus would not just use as much power, but most likely more than amd's, since nvidia is on 16nm instead of 14nm like amd. granted, not much more from being on 16nm vs 14nm, but a little nonetheless.

>In the previous pages, we compared the performance of Rise of the Tomb Raider's original Direct 12 patch and the performance of the game's newest DirectX 12 implementations, seeing a higher minimum framerate performance in the majority of cases and improved performance in all cases for AMD's R9 Fury X GPU.

>Now we will compare the DirectX 12 and DirectX 11 versions of the game with this new patch, as while the DirectX 12 version has improved we need to know if this new version actually provides users better performance than what we can achieve with the older DirectX 11 API.

>With AMD's R9 Fury X we see a performance improvement when using DirectX 12 in all cases, whereas Nvidia's GTX 980Ti actually sees a performance decrease in all cases except 1080p performance, where we expect that the CPU performance benefits of DirectX 12 may have had more of a benefit than any potential gains in GPU performance.

>All in all it seems that those with AMD GCN 1.1 or newer GPUs will be better off playing Rise of the Tomb Raider in DirectX 12 whereas Nvidia users are better off using DirectX 11.

>overclock3d.net/reviews/gpu_displays/rise_of_the_tomb_raider_directx_12_performance_update/5

what's important to note is that rise of the tomb raider is an nvidia-sponsored, nvidia gameworks title. so yes, the 980 ti did come out ahead at 1080p, and you can argue hurr dx12 doesn't matter, but the point is that nvidia didn't benefit from dx12 at all, and at higher resolutions suffered regressions.

>pcper.com/reviews/Graphics-Cards/3DMark-Time-Spy-Looking-DX12-Asynchronous-Compute-Performance

when we take a look at time spy we can see some pretty interesting results.

when we look at the total % increase with async on & off, one thing is made clear: amd wins hands down. even the humble $200 480 nets a higher increase in performance with async on than the 1080. maxwell flat out did not receive a boost at all.

there's a reason for that. according to pcper:
>Now, let’s talk about the bad news: Maxwell. Performance on 3DMark Time Spy with the GTX 980 and GTX 970 are basically unchanged with asynchronous compute enabled or disabled, telling us that the technology isn’t being integrated. In my discussion with NVIDIA about this topic, I was told that async compute support isn’t enabled at the driver level for Maxwell hardware, and that it would require both the driver and the game engine to be coded for that capability specifically.

which shouldn't come as a surprise, maxwell can't truly do async at all. it's terrible at switching back and forth between compute and graphics, as noted above. pascal does bring some improvements in this regard, but there is more to the story.

the problem with time spy is that it doesn't fully take advantage of async. they designed async in that benchmark the way nvidia stated in their "dx12 do's and don'ts":
>Try to aim at a reasonable number of command lists in the range of 15-30 or below. Try to bundle those CLs into 5-10 ExecuteCommandLists() calls per frame.

as noted above with the overclock.net link, time spy doesn't fully utilize async. it doesn't use a lot of it. it also doesn't use a lot of parallelism, meaning it's not throwing out a lot of compute and graphics operations at the same time. it feeds it mostly compute, sending a few compute operations at once, then switches to a little graphics, then back to compute. it does it in a way that doesn't over-saturate pascal's dynamic preemption.
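that quoted guideline translates to something like this on the app side - a hypothetical helper (SubmitBatched is my name, not anything from the linked articles) that bundles already-recorded command lists into a few ExecuteCommandLists() calls instead of one call per list:

```cpp
// Sketch of the batching guideline quoted above: many small command lists,
// submitted in a handful of ExecuteCommandLists() calls per frame.
// Allocator/PSO setup and fencing are omitted for brevity.
#include <d3d12.h>
#include <vector>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

// Assumed input: closed, ready-to-execute command lists recorded elsewhere.
void SubmitBatched(ID3D12CommandQueue* queue,
                   std::vector<ComPtr<ID3D12GraphicsCommandList>>& lists,
                   size_t batchSize = 5) {
    std::vector<ID3D12CommandList*> batch;
    batch.reserve(batchSize);
    for (auto& cl : lists) {
        batch.push_back(cl.Get());
        if (batch.size() == batchSize) {
            // One submission covers several command lists.
            queue->ExecuteCommandLists(static_cast<UINT>(batch.size()), batch.data());
            batch.clear();
        }
    }
    if (!batch.empty())
        queue->ExecuteCommandLists(static_cast<UINT>(batch.size()), batch.data());
}
```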

and see:
>guru3d.com/news-story/nvidia-will-fully-implement-async-compute-via-driver-support.html
>overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/2170
>The Asynchronous Warp Schedulers are in the hardware. Each SMM (which is a shader engine in GCN terms) holds four AWSs. Unlike GCN, the scheduling aspect is handled in software for Maxwell 2. In the driver there's a Grid Management Queue which holds pending tasks and assigns the pending tasks to another piece of software which is the work distributor. The work distributor then assigns the tasks to available Asynchronous Warp Schedulers. It's quite a few different "parts" working together. A software and a hardware component if you will.

>With GCN the developer sends work to a particular queue (Graphic/Compute/Copy) and the driver just sends it to the Asynchronous Compute Engine (for Async compute) or Graphic Command Processor (Graphic tasks but can also handle compute), DMA Engines (Copy). The queues, for pending Async work, are held within the ACEs (8 deep each)... and ACEs handle assigning Async tasks to available compute units.

Simplified...

>Maxwell 2: Queues in Software, work distributor in software (context switching), Asynchronous Warps in hardware, DMA Engines in hardware, CUDA cores in hardware.
>GCN: Queues/Work distributor/Asynchronous Compute engines (ACEs/Graphic Command Processor) in hardware, Copy (DMA Engines) in hardware, CUs in hardware.

and funny thing, to this day, nvidia has never enabled async support on maxwell.

>forum.beyond3d.com/threads/dx12-performance-discussion-and-analysis-thread.57188/page-28#post-1870218

>As soon as dependencies become involved, the entire scheduling is performed on the CPU side, as opposed to offloading at least parts of it to the GPU.
>If a task is flagged as async, it is never batched with other tasks - so the corresponding queue underruns as soon as the assigned task finished.
>If a queue is flagged as async AND serial, apart from the lack of batching, all other 31 queues are not filled in either, so the GPU runs out of jobs after executing just a single task each.
>The graphic part of the benchmark in this thread appears to keep the GPU completely busy on Nvidia hardware, so none of the compute tasks is actually running "parallel".

>I wouldn't expect the Nvidia cards to perform THAT bad in the future, given that there are still possible gains to be made in the driver. I wouldn't exactly overestimate them either, though. AMD has just a far more scalable hardware design in this domain, and the necessity of switching between compute and graphic context in combination with the starvation issue will continue to haunt Nvidia as that isn't a software but a hardware design fault.

AMD cards should be tested with an AMD processor,
not with some nvidia-loving Intel CPU.

Pascal does not support Asynchronous compute + graphics but contrary to Maxwell... Pascal added improved preemption as well as a more refined load balancing mechanism. So what Pascal can do is execute a Compute and Graphics task in serial (Software based scheduling) and assign these tasks to two separate GPCs. So while one GPC is handling the Compute task... another handles the Graphics task. While one GPC is filled with Compute work... it cannot do Graphics work and vice versa. The kicker is that the improved preemption allows Pascal to flush a GPC of work quickly (something Maxwell did not support) allowing a GPC to move from a Compute task to a Graphics task more quickly than Maxwell (faster context switching). This takes CPU time (the scheduler is software based and thus takes CPU time) and you are also limited by the number of GPCs on the GPU. If the workload becomes too heavy then you run out of GPCs and a performance hit ensues.

>this autistic shill again

The first feature nVIDA introduced is improved Dynamic Load Balancing. Basically.. the entire GPU resources can be dynamically assigned based on priority level access. So an Async Compute + Graphics task may be granted a higher priority access to the available GPU resources. Say the Graphics task is done processing... well a new task can almost immediately be assigned to the freed up GPU resources. So you have less wasted GPU idle time than on Maxwell. Using Dynamic load balancing and improved pre-emption you can improve upon the execution and processing of Asynchronous Compute + Graphics tasks when compared to Maxwell. That being said... this is not the same as Asynchronous Shading (AMD Term) or the Microsoft term "Asynchronous Compute + Graphics". Why? Pascal can’t execute both the Compute and Graphics tasks in parallel without having to rely on serial execution and leveraging Pascal’s new pre-emption capabilities. So in essence... this is not the same thing AMD’s GCN does. The GCN architecture has Asynchronous Compute Engines (ACE’s for short) which allow for the execution of multiple kernels concurrently and in parallel without requiring pre-emption.

They show it outperforming the factory OC MSI R9 390X in Black Ops III, Fallout 4, and almost matching it in The Witcher 3.
Anno 2205 is just a shitty game.

Intel's 2018 Coffee Lake platform is a 6 core die with Iris Pro IGP.
No word as of yet if this is a special SKU like most prior Iris Pro equipped parts, or if it will be made mainstream. At the very least I could see it carrying a premium over their mainstream ~$350 quad core i7s without yet stepping into i7E territory.

What is pre-emption? It basically means ending a task which is currently executing in order to execute another task at a higher priority level. Doing so requires a full flush of the currently occupied GPC within the Pascal GPU. This flush occurs very quickly with Pascal (contrary to Maxwell). So a GPC can be emptied quickly and begin processing a higher priority workload (Graphics or Compute task). An adjacent GPC can also do the same and process the task specified by the Game code to be processed in parallel (Graphics or Compute task). So you have TWO GPCs being fully occupied just to execute a single Asynchronous Compute + Graphics request. There are not many GPCs so I think you can guess what happens when the Asynchronous Compute + Graphics workload becomes elevated. A Delay or latency is introduced. We see this when running AotS under the crazy preset on Pascal. Anything above 1080p and you lose performance with Async Compute turned on.

Both of these features together allow for Pascal to process very light Asynchronous Compute + Graphics workloads without having actual Asynchronous Compute + Graphics hardware on hand.

So no... Pascal does not support Asynchronous Compute + Graphics. Pascal has a hacked method which is meant to buy nVIDIA time until Volta comes out.

extremetech.com/gaming/178904-directx-12-detailed-backwards-compatible-with-all-recent-nvidia-gpus-will-deliver-mantle-like-capabilities

>no argument against him
The only shill here is you.

>4 frames faster than reference
>3 times louder and 50 watts more power
>1060 still dumpstering it

Top fucking blunder

Does AMD card have driver level scheduling?

I reckon the reason AMD cards do shit on low tier CPUs is because AMD relies on the CPU to do their scheduling, thus the "driver overhead".

The OP's game for example is Anno 2205. It can only use 2 cores and the performance ceiling is hit. If AMD tries to rely on the CPU cores for scheduling, it slows down tremendously because those two cores are already at their limit.

The reason Nvidia cards do better on low end CPU seems to be because they can do their scheduling on their cards/cuda cores. Sorta like how they can run PhysX on their cards and AMD is forced to rely on CPU computation.

AMD needs a better driver that utilizes their GPU more and relies less on the CPU.

when we look at the picture related with doom - vulkan, you might notice something: the 1060 wins with weaker processors, but loses to the 480 with stronger processors.

there is another thing to note here as well. when we look at the x4 955, the gains between the 480 and 1060 are near identical, ~10+ fps. move over to the i5 750? the 480 receives higher gains than the 1060. the 480 increases by an extra 18fps while the 1060 receives 12fps extra. what's important to note here is that the 1060 received the SAME fps boost with the i5 750 as it did with the x4 955. when we move over to the i7 6700k is when the 480 skyrockets over the 1060. the 480 receives a 32fps increase while the 1060 receives a small 3fps increase.

when we look at extremetech.com/gaming/178904-directx-12-detailed-backwards-compatible-with-all-recent-nvidia-gpus-will-deliver-mantle-like-capabilities we find that both amd and nvidia get an equal playing field when it comes to driver overhead. amd no longer suffers from extreme driver overhead. both are similar now with the usage of dx12 and vulkan.

so what's going on here?

in doom - vulkan, async is enabled on amd cards and async is used HEAVILY in doom - vulkan, while on nvidia cards async is disabled. when we look at the x4 955 processor, it's simply too weak to feed the 480 its async queues. all it's benefiting from is the lower driver overhead, the same as nvidia. when we move up to the i5 750, we see a stronger increase with the 480 than we do with the 1060, 16fps vs 12fps. the i5 750 reaches a level where it's powerful enough to let the 480 start benefiting from async. when we finally move over to the 6700k, it's powerful enough to completely keep up with the 480, processing all of the async, and allows it to deliver amazingly strong performance.
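for the curious, this is roughly how a vulkan app finds the dedicated compute queue family it would feed async work to - a minimal sketch, assuming the driver exposes a compute-only family (amd's drivers do, which is where the ACEs come in); instance/device setup is trimmed to the bare minimum:

```cpp
// Sketch only: enumerate queue families and look for one that can do COMPUTE
// but not GRAPHICS, i.e. a dedicated async compute queue.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkInstanceCreateInfo ici = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t gpuCount = 1;
    VkPhysicalDevice gpu = VK_NULL_HANDLE;
    vkEnumeratePhysicalDevices(instance, &gpuCount, &gpu);  // first GPU is enough here
    if (gpu == VK_NULL_HANDLE) return 1;

    uint32_t familyCount = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &familyCount, nullptr);
    std::vector<VkQueueFamilyProperties> families(familyCount);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &familyCount, families.data());

    for (uint32_t i = 0; i < familyCount; ++i) {
        bool compute  = families[i].queueFlags & VK_QUEUE_COMPUTE_BIT;
        bool graphics = families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT;
        if (compute && !graphics)
            std::printf("dedicated compute queue family: %u (%u queues)\n",
                        i, families[i].queueCount);
    }
    vkDestroyInstance(instance, nullptr);
}
```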

well, what i wrote states that amd does not have any software based scheduling. only nvidia does.

the problem: amd's dx11 drivers are single threaded while nvidia's are multithreaded.

tl;dr: the difference is irrelevant, just get the card with more LEDs

Sapphire was my only choice for the 480. gonna grab another thing now, 480 is done

Hell, you can't even buy one for the next 3 weeks

nvidia on the other hand doesn't have async enabled. id disabled it since it gives nvidia cards a regression, and they're waiting for nvidia to release a driver to re-enable async for nvidia cards. so the only benefit nvidia is getting is the general reduction in driver overhead, which is why nvidia gets a bigger boost with older cpus and not newer, stronger cpus. the older ones cannot keep up with the driver overhead, so switching over to dx12 frees up a lot of resources for older cpus, while the 6700k is strong enough that it doesn't matter, so nvidia sees less of a boost.

that's why you'll notice the stronger the processor becomes, the less of a boost the 1060 receives, and the bigger the boost the 480 starts to receive.

gcn is built to be fed and to utilize async. the more you feed it, the more powerful it becomes. give it a ton of things to do (compute AND graphics at the same time) and it shines. vulkan / dx12 will always give amd a boost regardless.

this is NOT STATING you need an i7 6700k, or an i7 at all, to take advantage of gcn (the 480). as you noticed, the i5 750 from 2009 already started to give amd the boost it needed to take advantage of async. so any i5 processor since sandy bridge will allow the 480 to reap the benefits of async.

>tldr
you don't need a 6700k. you don't need skylake. all you need is a 2500k or higher when dealing with a high async (compute + graphics) load.

Really, has someone made a bench with an AMD CPU?

Overhead is real. AMD uses i7 for their slides. That should tell you something.

msi 1060 is literally half as loud
and consumes less power
and has better performance
which is even better on slower CPUs

the EU prices are
250€ Sapphire rx480 4 GB
270€ reference rx480 8 GB
320€ Sapphire rx480 8 GB
340€ MSI gtx 1060

I cannot justify spending 40% more for silence, because 4GB would be enough for me performance-wise since I'm happy with 1080p@30fps
but I don't want a fucking 44dB helicopter in my case

New games are pushing 8gb VRAM, even in 1080p. Now that we know that the RX 480 is a complete flop, what are we left to do? The GTX 1060 only supports a maximum of 6gb VRAM, which means that card is worthless too.

Seriously, is the GTX 1070 the only low cost solution right now?

No wonder people don't want to fuck with PCs anymore.

which is better gtx 1060 palit or gainward?

EVGA

Meanwhile I still can't get a 1080 because they sell out instantly when they get in stock.

Good, they are moving forward to DX12 and 8 thread usage.

They still have overhead in dx12 and vulkan, dipshit. AMD are this incompetent.

>Why are the requirements skyrocketing recently?

Consoles have 8gb ram now.

Here is a question for all of you AMDrones and Nvidiots constantly bickering about the best mid range GPU.

Do you own or intend to purchase a G-Sync/Freesync monitor?

If you already have a G-Sync/Freesync monitor it's a no brainer as to which 'team' you support. But if you have just a bog standard 60Hz monitor anything over 60 fps is meaningless anyhow. Unless you intend to get a G-Sync/Freesync monitor later on. In which case why are you buying a mid range GPU? Why are you not looking forwards and buying a GPU that supports 1440P? A mid range GPU such as the GTX 1060 or RX 480 performs perfectly fine for 1080P in most games. But anything over 60 fps should not really matter. All this bullshit about the 1060 does 78 fps in RotTR and 65 on the RX 480 makes no fucking difference for 60Hz monitors and you should be locking it to vsync or capping fps anyhow.

If you don't own a Freesync display and are not planning on changing your monitor any time soon then I would say go for the GTX 1060 based purely on power usage/noise and nothing else. Well maybe the cost and availability as well depending on where you live.

It's not driver overhead that makes those better cpus boost the gpu performance in Vulkan - it's the fact that Vulkan scales better with more cores.

Fact of the matter is, if FreeSync/GSync is factored into the buying decision, most would probably choose AMD. No one wants to pay the extra $150 tax for GSync monitors.

Oh and if anyone gives me shit about fps in certain scenes below 60 fps I dare you to say you can tell the difference between 30-59 fps. Both GPU's will rarely go below 30 anyhow unless your settings are stupidly set to max on everything.

the other problem was also how amd's hardware worked. it's very parallel hardware (think multi-threaded). you have things like ACEs and CUs that cannot both be used in dx11, so you always have "half" the card going underutilized.

gcn was made for a multi-threaded, parallel api like dx12 and vulkan. it was not made for the single threaded nature of dx11.

while kepler, maxwell, and pascal are made for the single threadedness of dx11, not dx12. they lack a hardware scheduler, which helped lower their tdp; their drivers were multi-threaded to handle the software scheduler and, as a byproduct, overall less driver overhead. they cannot do things in parallel very well. queue up too many things and they come to a halt. they can only do one thing at a time, and stop and switch to another, which was exactly how dx11 was designed. gcn, meanwhile, can do many different things at the same exact time.

which is why amd has stronger gains in dx12 over nvidia. it's not just driver overhead, it's also that their hardware is better suited for the parallel, multi-threaded nature of a dx12-style low level api.
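and the dx12 side of that multi-threadedness, for comparison - a bare sketch where each worker thread records its own command list and only the final submit is serialized (allocator reuse, PSOs and actual draw calls omitted, worker count made up):

```cpp
// Sketch only: in DX12 every thread gets its own allocator + command list,
// so recording is parallel; only ExecuteCommandLists is serialized on the queue.
#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC qd = {};
    qd.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qd, IID_PPV_ARGS(&queue));

    const int workerCount = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (int i = 0; i < workerCount; ++i) {
        workers.emplace_back([&, i] {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocs[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocs[i].Get(), nullptr,
                                      IID_PPV_ARGS(&lists[i]));
            // ... each thread records its own slice of the frame here ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& cl : lists) raw.push_back(cl.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```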

>ultra settings
>on a $200 gpu
>a gpu specifically marketed as the lowest entry point for VR

No one does this
Nor do they expect to be able to do this
4gb vram will run high settings in any 1080p game perfectly fine for years, then you can get a HBM gpu with 32gb vram @ 5pb/s

Look at these mad gains for the 8370.

not exactly true.

there is a limit to how much it will scale.

as pic related shows.

while i've seen others show another small boost going from a quad to a six core, but no boost going from a six core to an eight core.

it really depends on the game.

but overall, yes, dx12 & vulkan do scale better with multi-core cpus if taken advantage of, in comparison to dx11. MUCH better scaling. a quad will do MUCH better on dx12 than it will on dx11.

mad gains indeed because bulldozer and piledriver have awful single threaded performance.

not as strong gains on the intels because their single threaded performance is strong enough to handle it.

If you have a 60Hz monitor buy a GTX 1060 and STFU. If you have either a Freesync or G-Sync monitor your choice has already been made so again STFU.

Well, do consider the 8370 is just an 8350 that turbos slightly higher, and it is competing against the current mightiest consumer i7 reasonably well. I would like to see a 9590 in that test, as I bet it would be damn close to the 6700k simply due to its insane clockspeed.

what this guy said.

the gains should be greater on processors with weaker single threaded performance and already a bottleneck in the game.

gains should be little if it's a processor that's already strong enough and not a bottleneck.

doing gods work user

I wonder if he is Sup Forums's version of mahigan.

>same pasta and screens
>literal shill

>explains shit
>cites sources
>shill

I think you might not understand what the word shill means, or you're salty that negative things are being said about Nvidia.

All nvidiafags have to do is either ignore or deny to turn your argument into a conspiracy.

It's the latter, of course. Nvidishills can't understand why anyone would say anything bad about their paymasters.

>Thinks anyone is going to read this shit
>He is literally so fucking pathetic that he writes full blown research to prove to some random strangers that they're wrong.
>His sources are shill based benchmarks and speculations anyway

Wew lad.

Uhh dips to 30-40fps are easily noticeable and this is coming from someone that thinks 120hz is mostly placebo

See, an Nvidiot in its natural habitat. Don't get too close lest we disturb the creature - the mother might not take it back.

But both the 1060 and 480 will do that anyhow so it's kinda moot.

AMD vs Nvidia is like the new PS4 vs XB1

I can, anything under 45 fps starts looking super clunky to me.

Do you own either a G-Sync or Freesync monitor? No? Then get a GTX 1060 and STFU.

Nobody asked you anything fagboy. Sit down and shut up.

>wait until rx 480
>wait until drivers
>wait until non-reference rx 480
>wait untill drivers
>wait until zen
>wait until drivers
>wait until rx 490
>wait for drivers
>wait for non reference rx 490
>wait for drivers

Never ending circle of AMD fanboys. I'm glad I bought a gtx 1060 and can finally play witcher 3 at 60fps on ultra, or doom at 90-150fps.

>Nvidia tax

As opposed to waiting basically forever for the async driver?

you are trash kiddo,

Gtx 1060 > rx 480

All I am saying is people are arguing over fucking nothing. If you are a G-Sync/Freesync owner then you have already picked your side. If you own a 60Hz monitor then pick the one that is cheaper and available. If that is the GTX 1060 then better still, since you gain in power usage/fan noise.

pretty much.

dx11 didn't benefit from a hardware scheduler, so nvidia was right: even though it was more powerful, it went underutilized in dx11. dx11 also didn't support things like full blown async to take advantage of compute, let alone compute at the same exact time as graphics (compute + graphics).

so a software scheduler was more than enough for dx11 since it was very singular. pair it up with multi-threaded drivers to offload it to as many threads as you can, and the driver overhead is reduced to sane levels. dx11 did of course have some level of multi-threadedness, but nothing major and, most importantly, nothing that made it parallel in nature.

and nvidia did all of this at first to simply lower tdp.

amd bet on the industry, and future apis, going multi-threaded and parallel; most importantly, on the addition of compute to offload a lot of the computational work and being able to process both compute & graphics at the same exact time. they got tired of waiting and forced the industry's hand with mantle by showcasing a low level api that offered excellent multi-threaded performance, more direct access to the hardware, and most importantly, parallelism.

all things considered, if nvidia tossed a hardware scheduler back in, their tdp would go up. the 1060 would probably match the 480 in tdp if it had one. the 980 ti would have been as much of a housefire as the fury x.

Why the fuck is the selection of gsync screens so limited compared to freesync? Both in resolution and screen size? A fuckoff huge *sync screen is lovely but most gsync screens barely break 27" or so.

AMD users like to compensate ;)

Because Gsync is far more strict about their screen requirements. I think every one must have a 24hz to 60hz/144hz range.

retard, consoles have 5-6gb of shared vram; pc-style system ram has nothing to do with it

What about someone like me who is looking to buy a new monitor? I'd go for the 1060 but lack of futureproofing worries me and why should I pay $100 extra for Gsync. Doesn't seem worth it.