GTX 480 DX12

Since the GTX 480 had a full hardware scheduler, that means it can run hardware asynchronous compute, right?

will dx12/Vulkan make a GTX 480 a valid card again?

Nvidia lied. They said everything back to Fermi would receive DX12 support, but so far they have not actually released a supporting driver, and the main reason I suspect (as you say) is that the housefire-inducing hardware scheduler would make their newer architectures look not so hot in comparison.

Do note Fermi still wouldn't do async compute in the same way GCN does, because of differences in design.

>will dx12/Vulkan make a GTX 480 a valid card again?
No.

Enjoy your intentionally gimped performance

Nvidia are just sweeping it under the carpet and hoping nobody remembers.
To be honest, it would be pretty pointless going to the effort now.

Relying on CUDA 100% despite most compute functions relying on shit that OpenCL brought to the table was a bad call, especially since AMD had their compute SDKs out in the open since the 5000 series.

wut? Who cares about compute?

Kepler and beyond have a grid management unit, Hyper-Q and dynamic parallelism, tech that is somewhat similar to AMD's async engines, and its current evolution is what we see now as Pascal's dynamic load balancing. Except Nvidia were able to exploit it without DX12 or Vulkan, and it has been there ever since.

You don't see much improvement for Nvidia on async because they have already been very efficient at handling tasks from the beginning with their drivers and their existing tech. In DX12 and Vulkan the developers are now responsible for this, hence they need to fine-tune their program for a specific GPU to run optimally.

This isn't Sup Forums, a lot of people care about compute outside gayming.

If async is so important, why does only Nvidia get used for deep learning and all that science shit while AMD's only use is cheap gaymen and buttcoin mining?

>a lot of people care about compute
explains why Nvidia has 80% marketshare and almost all professional applications use CUDA. People really care about compute.

>giving a shit about Async

'muh async' has nothing to do with the scheduling anymore. it's just a catch-all buzzword that tech-illiterate children regurgitate in order to reference the GCN-focused optimization that is happening in all of AMD's recent sponsored games.

basically: AMD is trying to copy the success NVIDIA had with gimpworks and market it as something that it isn't.

>AMD trying to copy Gimpworks with an open source solution
you may be more tech illiterate than you think, friendo

GPUOpen is completely open source, Nvidia could optimize their shit for it if they wanted to, they just refuse to because either they're too prideful (like they were with FreeSync), or they're waiting for Volta to support async so Pascal users can find a reason to "upgrade"

remember, a GPU upgrade a year keeps the goyim's wallet clear

This so much
Async is all AMDrones have to lessen the blow that even Nvidia's weakest GPU kills their flagship RX 480 in performance

>GPUOpen is completely open source,

learn what you're talking about before posting. only the effects that are part of GPUOpen are open-sourced, the rest is all proprietary/closed. this includes AGS and the explicit CrossFire API extensions.

>Nvidia could optimize their shit for it if they wanted to,

AMD can also optimize for gimpworks; that doesn't mean I'm going to support unethical behavior.

>or they're waiting for Volta to support async so Pascal users can find a reason to "upgrade"

Pascal already supports >muh async.

...

the '''advantage''' AMD has in DX12/Vulkan doesn't mean shit if every other frame spikes over 50 ms. the stuttering is horrible on all the recent GCN cards.

Doesn't change the fact Nvidia's GPUs are much more powerful than anything AMD has to offer

you can regurgitate "muh async" all you want, facts are facts

>almost all professional applications use CUDA.
They're not all Adobe. My applications use DirectX and FP64 is important. Unfortunately even workstation cards have gimped double precision these days, unless you go for the high-end cards.

>AMD can also optimize for Gimpworks
Stopped reading right there
learn some tech literacy and copyright laws before you post, friend

>more powerful than anything AMD has to offer

If you are going to cite TFLOPS as what determines power, you will be surprised.

>Doesn't change the fact Nvidia's GPUs are much more powerful than anything AMD has to offer
They are not; in raw performance AMD still wins

>not realizing that most optimization happens at the shader assembly level

NVIDIA does shader replacement at the driver level with their own optimized versions and AMD has been rewriting shaders as highly optimized GCN-specific bytecode (see: youtube.com/watch?v=P7RSszCPt68) for sponsored devs.

Which is why GTX 1080 and 1070 bootyblast AMD in pretty much anything ;)
but keep justifying why you're glad you didn't get those grapes, fox

lol

>AMD can also optimize for gimpworks

You realize GameWorks isn't open source, right? AMD cannot optimize for it

alright guise
I just got a Sapphire Nitro+ 4GB 480 OC and it turns out I don't have an 8-pin connector
I put in a 6-pin and it sort of works on the default Windows driver, but the screen turns off when I install any version of the AMD drivers
I'm going to buy an 8-pin cable tomorrow

This DX12 shit is sounding more and more like a glorified meme to me as time goes on.

Weren't devs saying how much harder it is to implement async and stuff like that? Why would anyone bother going out of their way to relearn shit and implement something that is harder and takes more time to do right, only so the tiny marketshare represented by AMD can take advantage because they don't know how to make drivers without massive CPU overhead?

I simply don't get it.

>Which is why GTX 1080 and 1070 bootyblast AMD in pretty much anything ;)
On muh gayms they do; on simple stuff like memecoins or simple OpenCL kernels the Fury X beats the 1080, despite being a previous-gen card
>You realize GameWorks isn't open source, right? AMD cannot optimize for it
They actually can, by clean-room reverse engineering it; it's just obviously ridiculously expensive

>synthetic benchmarks

Might as well start defending bulldozer while you're at it.

AMD's OpenCL driver is buggy as hell.

>memecoins
AMD on cryptocurrency is like one of those low-functioning autistics that can do advanced mathematics in their head.

>simple OpenCL kernels
i.e. not real world performance

>rendering is now synthetic

Ironically, rendering is something Bulldozer is actually quite good at.

>muh memecoins
again, fox and grapes

You said performance, not "real world", which by your standards apparently means only GameWorks games
In F@H my AMD/ATi setups have always outperformed Nvidia cards by a lot, and in suites like Adobe/Autodesk pretty much anything but a pre-2010 toaster works well enough
But that's probably not real world because it's not games, right?

AMDrones try to justify their purchase, the thread.

Nvidia user: my GTX 1080 is great because it plays any game I want! I'm glad I got it.
AMD user: I'd like to interject for a moment, but AMD is actually superior to Nvidia in performance in *insert game nobody gives a shit about here*, granted by its power over Nvidia in *insert meme technology nobody gives a shit about here*, thus proving that all benchmarks where Nvidia has superior performance are just a sham, therefore I am glad I got my *piece of shit housefire GPU* instead of an Nvidia card because I am not enlightened by some phony God, but by my own intellect.

>This DX12 shit is sounding more and more like a glorified meme to me as time goes on.

Congratulations, you've finally realised what everyone who isn't an AMD shill did months ago

>Autodesk pretty much anything but a pre-2010 toaster works well enough
No, modern consumer cards suck shit.

>turns out I don't have 8 pin connector

Why would you buy something without checking if you can use it first?

because I /do/ have 8-pin connectors that came with the power supply, but they are somewhere in one of the hoards in my messy apartment and I couldn't find them
I thought I could, but I couldn't

So basically you are an idiot?
Fucking find them.

Just jump the sense cable to the right pin. I'm surprised your card didn't come with an 8-pin adapter.

I'm surprised as well
The box has nothing but three pieces of paper and the card itself
Looks like there was a real race to the bottom for them wrt price

>don't want to support nvidia business practices
>don't like AMD products
Why can't someone else throw their hat into the ring?

Wasn't Matrox supposed to come back or something?

A lot of reasons, surely you can think of them, right?
They're pretty obvious.

you could buy a VIA chip. comes with S3 graphics even.

:^)

Liar, it comes with a driver disc and some faggy purple pamphlet

>will dx12/Vulkan make a GTX 480 a valid card again?
I had SLI 480s. If you use more than one monitor, they won't go into idle, which heats up your room like crazy and burns power. They also sound like hair dryers.

> would make their newer architectures look not so hot in comparison.

their newer architectures don't reach 95°C. They already don't look so hot.

wood screws?

That's true for all AMD cards past and present. Guess it's the hardware scheduler's fault.
t. ex-AMD user

I dunno, the Maxwell Titan gets close. (note that's the VRAM modules, not the VRMs)

I'm actually using Nvidia and I care about async. At least AMD are trying to implement new technology, even though they have been screwed almost illegally by Nvidia and Intel for years and they are poor as fuck. The only new shit we have is the epic G-Sync technology that isn't better than FreeSync, gAmEwOrKs, and that new thing Nvidia made to take screenshots. Meaningless technology.

Fun fact: there are over 100 monitors that support FreeSync. I do not know how many G-Sync screens there are, but I'm willing to bet it's 50 or under.

yes, those were extremely helpful

So what?

>he thinks it's healthy to run VRAM modules at 100°C+

hey, it's probably designed so it barely lasts until the warranty runs out. after that it doesn't matter if it breaks.

Thing is, on such a "premium" product Nvidia cheaps out on fucking everything. It would've cost them very little to slap some thermal pads and a backplate on the thing, and that would go a loooong way to cooling those rear-mounted memory chips.

That's the worst thing, they barely support their shit even when they are the current top dog. I'm seriously considering buying AMD for my next build in 2 years (using a GTX 760 currently) if they continue with these Microsoft-tier policies

>he thinks it's healthy to run VRAM modules at 100°C+
Sure, why not? What is their maximum operating temperature?

By the way, all this talk about DX12 is useless if I'm not playing video games, right?

More or less yes.

Maybe with the shitty default fan profile that keeps it at 20% even under load; this wouldn't happen in a system that has good airflow and a properly set up fan profile.

>screaming fan because Nvidia refuses to put a good cooler on a gaming card

The only good Titan ever made is the cheeky Gigabyte Titan Black that shipped with a WindForce cooler in the box, along with instructions on how to fit it.

the Titan coolers are fine, they're vapor-chamber designs, which are a lot better than the conventional blowers like the one used on the 1070/1060 FE, AMD reference cards, etc.

>GTX 480
We talking AMD or Nvidia here?

>GTX

Gee I dunno user.

>480
Gee I dunno user.

even if this is true, Nvidia has been using the same fucking strategy since the beginning

Fun fact: 90+% of 144 Hz FreeSync monitors only support FreeSync up to 90 Hz because it is dogshit technology.

[citation needed]

My friend says AMD cards get exponentially hotter than Nvidia's. I tried to prove him wrong, but couldn't find facts to back me up. Is that just a Sup Forums meme?

I'm having a really hard time choosing a side to bait.

I agree, you clearly do not know.

So we go by the prefix now, huh?

>$200 gpu vs $1200 gpu

nice meme image

it r a fact

www.pcper.com/news/Displays/ASUS-MG279Q-144-Hz-Display-Caps-90-Hz-FreeSync
wccftech.com/amd-freesync-hack-expands-refresh-rate-range
>However there are also limitations to FreeSync as there’s a minimum and maximum refresh rate the monitor can deal with. For example 30Hz to 60Hz, in this case the monitor can draw frames at a framerate between 30 and 60 instantly with no stutter, lag or tear. However if the framerate exceeds 60 or goes below 30, the monitor reverts back to a fixed refresh rate.
www.pcper.com/reviews/Displays/AMD-FreeSync-First-Impressions-and-Technical-Discussion/Gaming-Experience-FreeSync-
>Let’s talk about that first: the 48-75 Hz range, with the 75 Hz being the limit of this panel,
>When running games over the 75 Hz maximum refresh rate you have the option to enable or disable VSync
It's shit

Would someone mind saying this in simpleton terms?

I'm reading into hardware but this is too next level. I just don't understand what hardware asynchronous compute is...

I still don't know what's better, an 8GB RX 480 or a 6GB GTX 1060. I see them at literally the same price, but there are shills everywhere. It's maddening

>hardware async compute
Marketing buzzword.
It means AMD-sponsored games will see a 5% performance boost on AMD cards.

The Nvidia housefire meme comes from the GTX 480 days. Ever since the 500+ series, AMD has made the housefire-tier cards.

Since the 6xx series; the 5xx were hot too. But to be honest, it's all a meme, both brands run hot as fuck for my taste.

I am an AMD user here. I purchased an RX 480 (MSI Gaming X though) over a GTX 1060 variant simply because it was £30 cheaper here in Blighty.

But let's face it, Nvidia does DX11 and OpenCL better. Their driver is fully optimized to utilize DX11 and OpenCL. AMD just could never get the same performance until sometimes months later, while Nvidia was as fast as it could be right out of the gate, because it takes longer to optimize for AMD's architecture (see below).

That's because Nvidia owns most of the technology patents that work well with DX11 and OpenCL, and AMD, being latecomers, had to use other methods and reverse engineer that shit or face time in court fighting off lawsuits.

AMD tries to overcome these licensing issues with brute force. AMD hardware is more powerful than Nvidia's on paper. But because of the way it has to operate, without being able to use Nvidia's methods, it runs hotter and uses more power whilst having a hard time matching team green's performance.

This is simple fact.

It's not because 'AMD are poo-in-loo shit tier'. It's because they are hand-tied by history in the GPU world as a competitor. It's simple economics and the law of who got to the patent office first.

Most games operate within a certain range whether you are using a shit-tier RX 460 or a blasting NuTitan X. In the case of the RX 460, if it drops below the FreeSync range the driver will run Low Framerate Compensation to smooth things over. If running a NuTitan X at super high framerates you are most likely not going to notice any tearing once it goes past the FreeSync limit anyhow. The only issue is the way FreeSync works with certain panels. IPS traditionally is a lot harder to get a wide range on. You will probably have to select a range manually depending on your GPU and what games and settings you will be using. G-Sync deals with that shit for you, but you pay a premium for it (a $200 premium). BTW, with FreeSync you generally want to leave vsync off. With G-Sync you leave it on.

As it should be.
The retards bought ATI behind Nvidia's back.

Stop painting them as some industry good guys fucked by patents.

1. Look at what you are playing on. Is your system low-spec? AMD GPUs can be CPU-bound. If you are using some shitty old AMD or Intel CPU, get the GTX 1060. Also think about what you will be using it for and what games you are likely to be playing the most. Newest triple-A console titles? Probably the RX 480. Older games and lots of DX11 titles? GTX 1060. Worried about power usage? 1060. Want a cheaper FreeSync monitor? 480. Weigh it all up. They both have pros and cons.

>Nvidia does DX11 and OpenCL better
OpenCL has always performed better on AMD cards, when it works; you are probably thinking of CUDA
Also, Nvidia is the company that was late to OpenCL. AMD dropped ATI Stream as soon as the first OpenCL spec was finalized, meanwhile Nvidia kept pursuing CUDA, and still pursues CUDA

I am not. I am just stating that that is how the industry works. But without competition Nvidia would have no reason to innovate, and it would cost a buttload more. Sure, they might get broken up as anti-competitive, but that's beside the point.

The plus side is that as DX12 and Vulkan come along, AMD do offer an alternative and competitive product for a cheaper price. But only if devs code their shit to take advantage of the hardware.

Sapphire never included much with their cards.

From my 6950 to my 390s, the boxes were always barebones.

DX12 and Vulkan allow queuing graphics and compute tasks from multiple CPU threads into multiple GPU command queues.
As I understand it, a game could use this to e.g. have ambient occlusion or postprocessing running while it's drawing other stuff, so the GPU never runs out of things to do even if the scene rendering stalls for a bit on the CPU side.
While hardware from both AMD and Nvidia is perfectly capable of supporting this, AMD seems to benefit much more from using it; their current cards are much faster at switching between task contexts, or something.
And if past compute and game benchmarks are any indication, their cards _do_ have plenty of computing power; they just can't seem to get it all working on a single complicated task because of some bottleneck somewhere.
Because the technique only benefits AMD at the moment, any game hyping it is probably heavily AMD optimized to begin with. Nvidia can probably catch up a bit with driver hacks, but as efficient as their current hardware is, it may be weaker in this department.
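Roughly what that looks like from the API side, as far as I understand it - a minimal Vulkan sketch, not anyone's actual engine code. The vkGetPhysicalDeviceQueueFamilyProperties call and the queue flags are standard Vulkan; the helper struct, the function name and the fall-back-to-the-graphics-family behaviour are just made up for illustration. The idea is to grab the usual graphics queue family plus a compute-only family if the driver exposes one:

#include <vulkan/vulkan.h>
#include <vector>
#include <cstdint>

// Hypothetical helper: find a graphics queue family and, if the driver
// exposes one, a separate compute-only family (on GCN those map onto the
// async compute engines; other hardware may just alias the same engine).
struct QueueFamilies { uint32_t graphics = UINT32_MAX; uint32_t compute = UINT32_MAX; };

QueueFamilies pickQueueFamilies(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> props(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, props.data());

    QueueFamilies out;
    for (uint32_t i = 0; i < count; ++i) {
        if ((props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) && out.graphics == UINT32_MAX)
            out.graphics = i;                              // first family that does graphics
        if ((props[i].queueFlags & VK_QUEUE_COMPUTE_BIT) &&
            !(props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT))
            out.compute = i;                               // compute-only family = async queue
    }
    if (out.compute == UINT32_MAX) out.compute = out.graphics; // no dedicated queue, fall back
    return out;
}

You then create a VkQueue from each family, record command buffers on as many CPU threads as you like, and submit graphics and compute work to the two queues independently.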

>They both have pros and cons.

>pro: cool Nvidia logo
>con: gimped performance inside of 2 years

>switching between task contexts, or something.

A context switch on every Nvidia architecture requires a full pipeline flush and that has an enormous latency penalty (Nvidia explicitly advise against doing lots of context switches in their programming guide). Pascal is (essentially) better at predicting these context switches and balancing load to fit.

GCN on the other hand gives absolutely zero fucks about a context switch and takes virtually no penalty doing so, so switching between compute and graphics is latency-free for AMD and ties into their use of async compute.

tl;dr AMD benefits from hyperthreading for GPUs, Nvidia does not.
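To put that in concrete terms, the two submit paths look roughly like this - again a minimal Vulkan sketch, assuming the two queues from the earlier snippet and command buffers recorded somewhere else; cbGfx, cbCompute and computeDone are made-up names. Going by the posts above, GCN can genuinely overlap the two submissions, while hardware that has to flush on a context switch ends up effectively serializing them.

#include <vulkan/vulkan.h>

// Hypothetical per-frame submit: compute work (e.g. AO or postprocessing) goes
// to the async queue and signals a semaphore; the graphics submit waits on it
// only at the stage that actually reads the result.
void submitFrame(VkQueue gfxQueue, VkQueue computeQueue,
                 VkCommandBuffer cbGfx, VkCommandBuffer cbCompute,
                 VkSemaphore computeDone) {
    VkSubmitInfo compute{};
    compute.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    compute.commandBufferCount = 1;
    compute.pCommandBuffers = &cbCompute;
    compute.signalSemaphoreCount = 1;
    compute.pSignalSemaphores = &computeDone;
    vkQueueSubmit(computeQueue, 1, &compute, VK_NULL_HANDLE);

    VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    VkSubmitInfo gfx{};
    gfx.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    gfx.waitSemaphoreCount = 1;
    gfx.pWaitSemaphores = &computeDone;
    gfx.pWaitDstStageMask = &waitStage;   // don't block earlier graphics stages
    gfx.commandBufferCount = 1;
    gfx.pCommandBuffers = &cbGfx;
    vkQueueSubmit(gfxQueue, 1, &gfx, VK_NULL_HANDLE);
}

The semaphore is the only ordering constraint here, so whatever overlap the hardware can actually extract between the two queues, it gets for free.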

Older CPU, an i5-2500. Both newer and older games. Likely sticking with 1080p for at least a few more years

This, both brands hit their turbos and smash into 80°C on their stock cooling these days.

>tiny market share represented by AMD

AMD has more than 70% market share. All consoles are running GCN-based APUs.

It's a meme.
On non-reference coolers they are pretty much the same. AMD reference coolers are shit, but not even OEMs use AMD reference coolers.

7970 GHz and 780 Ti owner here.

Regretting that 780 Ti upgrade...