Why is Nvidia so shit?
>higher is better
are you okay user?
Because as soon as a new generation of cards is released, they drop driver optimization for older-gen GPUs.
still rocking that 770, babyyyyyyyyy
>tfw 1070
I've been saying all along that AMD was going to destroy Nvidia with the new APIs. Nvidia is going to have to make a much more general-purpose GPU to keep up, and if they also want to stick to their guns on things like ludicrously high tessellation, they'll have to seriously raise their prices.
>R7 200 series
what the fuck, how is the RX 480 only 15 fps behind a gtx 1070?
>yfw you have 512MB ATI AMD Radeon HD 7560D
kill me, i can barely play minecraft
I've got a 970 and am still happy with it, but damn the Hawaii cards are impressive for how well they are still performing
AMD still supports some of its older-generation cards.
Looks like Nvidia is already gimping Pascal
>they still list the 7870
Haha, I can still delude myself that my card is relevant!
go get a cheap 7970 ghz
literally the best card for poor fags right now
It is. AMD cards tend to hold up better throughout the years than NVIDIA cards.
I have a laptop with a 670mx
Fight me.
I'm running a Phenom 9650, my card is already bottlenecked
Thanks for the tip
The card is a tank, it might not always look the best with newer games, but it'll sure as shit run them perfectly if they are gpu centric
Nvidia bet the horse and cart on the DX11 method of handling work on the GPU, where everything is managed and optimized on the software side. AMD bet the horse and cart on everything being managed by the hardware, with the developer in the driver's seat.
The industry agreed. DX12 & Vulkan are both designed around the core of AMD's Mantle API, not Nvidia's CUDA or any other API philosophy. So while Nvidia can deliver very high framerates in DX11, and in some cases even in DX12, that's all they are: high framerates. What about minimum framerate? What about frametime?
On top of that, Nvidia hardware doesn't have a hardware scheduler on die; it's all handled by software. So when it comes to single- or multi-GPU implementations in DX12, there are negligible performance improvements. But on the AMD side, with Async Compute utilized properly, there are significant gains; and with dual GPUs you see upwards of 30-40% improvement in performance.
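To put the async compute point in concrete terms, here's a minimal C++/Vulkan sketch (my own illustration, nothing from the benchmarks above) that just enumerates queue families and reports any compute-only family, i.e. the kind of dedicated compute queue a hardware scheduler feeds alongside graphics:

```cpp
// Illustrative sketch: look for a compute-only queue family, the usual path
// that async compute work goes down. Instance setup is trimmed to essentials;
// build against the Vulkan SDK (link -lvulkan / vulkan-1.lib).
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_0;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        uint32_t famCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &famCount, nullptr);
        std::vector<VkQueueFamilyProperties> fams(famCount);
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &famCount, fams.data());

        for (uint32_t i = 0; i < famCount; ++i) {
            bool compute  = (fams[i].queueFlags & VK_QUEUE_COMPUTE_BIT) != 0;
            bool graphics = (fams[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
            // A compute-capable family that is NOT the graphics family is the
            // queue async compute submissions typically target.
            if (compute && !graphics)
                std::printf("queue family %u: dedicated compute, %u queues\n",
                            i, fams[i].queueCount);
        }
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

On GCN cards this typically reports a compute-only family with multiple queues; whether work submitted there actually overlaps with graphics is exactly the scheduler question being argued here.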
>The top 5 cards are made by Nvidia
>Nvidia is shit
Pick one.
Also helps that most console ports will favour AMD due to them being the main GPU provider for the PS4 and Xbone, so devs have already optimised the engine to perform on their GPUs.
Why does this game run worse than BF1 despite looking nowhere near as good?
You do realize that they're ordered by the models, right? The GTX 980Ti and the R9 FuryX have identical maximum and average framerates at 1080p in DX12.
It makes it seem like the 980Ti is superior to the FuryX, when in fact they're exactly the same.
That said, DX12 implementations across every title to date have been complete shit, because they're patched in rather than implemented from the ground up. A GOOD example of Async Compute, which is the bulk of what DX12 IS, is Doom 2016, which uses Vulkan for its render path.
Go look at how the Titan X, 1080/70/60 and other Nvidia cards perform compared to how AMD cards do between OGL 4.3 and Vulkan.
Any developer that utilizes the API to its strengths from the ground up in a game is a better benchmark for competing GPU hardware than any game on the market that patches in DX12 support AFTER the fact.
>You do realize that they're ordered by the models right?
They're ordered by performance m8.
Are you blind?
Dice has really perfected the Frostbite engine, it runs great overall on lots of hardware configurations
If every game was as well optimized in Vulkan as Doom is Nvidia would shit the bed.
Why did they even bother putting in multi-GPU for the 1060, it doesn't even support SLI
>my 390 maxes out D44M @ 1440p 60 FPS effortlessly
>ordered by performance
>980Ti has the same performance as the R9 FuryX
>980Ti therefore is superior to the R9 FuryX
Stop spewing autism you faggot.
>290x still that high
>nvidia cards released at the same time are falling behind
fucking how
such bullshit
Probably to drive the point home.
>all cost twice or more as much
You have the 480 beating the 980 and around 15fps off the 980Ti/1070. You could probably crossfire and reach 1080 levels if AMD wasn't so shitty at drivers.
salt in the wound.
>nviditears
my aged as fuck GTX 780 @ 1.2GHz (OC) can get me nearly 54fps consistent at 1080p maxed out (minus collision particles, which utilize Async Compute, which the 780 cannot do--there's no hardware on die to handle it, and the software shits the bed).
I'm waiting for AMD to release the R9 580 / Fury X II cards (Vega 10 respectively) before I upgrade, so that there's some really beefy Async-capable hardware fed by HBM2, probably giving us anywhere from 750GB/s to 1TB/s of bandwidth.
Why is the 290 better than the 380x?
Does the numerical system not apply there?
Mix between Nvidia gimping older cards and AMD having highly unoptimised drivers at a card's launch. This lets AMD keep up, since they keep extracting more performance from the card over time, while Nvidia cards lose performance over time.
It's just an all-around well-optimized game but it really showcases the brute force of AMD.
Because a 290 is an old 390 which is faster than the 380x
>tfw bought a 390x over a year ago and it's now performing better than a 980
See: Nvidia GPUs since the GTX 680 era no longer have a hardware scheduler on die. As a result, Async requests are handled by the software--and while the software can do them, there are limits to how many queues Nvidia can handle simultaneously before performance shits the bed. Up to the GTX Titan X (Pascal version), Nvidia can handle 32 command queues: 1 master, 31 sub.
Meanwhile, because AMD cards have had a hardware scheduler on die since basically the 7xxx series cards, and they've improved it over the years, AMD GPUs can handle hundreds of command queues if the developer needs that.
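For reference, putting work on more than one queue is trivial from the developer side. Here's a bare-bones C++/D3D12 sketch (mine, purely illustrative, not either vendor's sample code) that creates a compute queue next to the usual direct/graphics queue; how many of these the GPU actually services concurrently is down to the scheduler, which is where the 32-vs-hundreds difference above comes in:

```cpp
// Illustrative sketch: one direct (graphics) queue plus one compute-only
// queue on the default adapter. Windows-only, compile with MSVC.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // The "master" graphics queue: accepts draw + compute + copy work.
    D3D12_COMMAND_QUEUE_DESC directDesc = {};
    directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> directQueue;
    HRESULT hr1 = device->CreateCommandQueue(&directDesc,
                                             IID_PPV_ARGS(&directQueue));

    // A compute-only queue: work submitted here can, in principle, overlap
    // with graphics work on the direct queue (async compute).
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    HRESULT hr2 = device->CreateCommandQueue(&computeDesc,
                                             IID_PPV_ARGS(&computeQueue));

    return (SUCCEEDED(hr1) && SUCCEEDED(hr2)) ? 0 : 1;
}
```

Whether the two queues genuinely execute in parallel or get serialized is decided by the driver and the hardware scheduler, not by this code.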
Source: twitter.com
Benefit of Async Compute when utilized properly: wccftech.com
Ty man
Doesn't explain the difference under D3D11.
>Titanfall 2
Kek
Which is ultra and which is medium?
really makes you think
Is the bottom ultra? The shadow on the vents is darker so the setting for that is probably higher.
I would say the same as there is also a slight bit more fog/dust in the centre background near the rear vent.
The irony is that Nvidia brute forces the solution, while AMD implements a fuckton of hardware on their cards that tends to make the GPU run hotter than Nvidia's own implementation, but as a result, lets the GPU make significant gains when all hardware is fully utilized.
One of the reasons Nvidia is still not implementing Async Compute & hardware-level schedulers in their GPUs is that if they do, power draw and heat will skyrocket. Right now, though they're packing fewer shaders per SIMD and per die than AMD, their shaders are fatter; larger.
However, the hardware scheduler and the other architectural changes needed to properly implement async would take up a considerably greater amount of space on die. Which means they'll have to either shrink their shaders and increase the count to make up for it, or keep their shaders the same size and reduce the shader count to accommodate the scheduler and supporting hardware on the die.
Either way, performance would drop to near AMD levels or perhaps even lower, while power and heat increase beyond their current envelope of sub-200W for their high end & sub-150W for their middle to low end.
That's why, in all likelihood, even the next Nvidia GPU, Volta, won't see an Async Compute implementation even at the 16/14nm fabrication process. They will very likely wait until either 11 or 9nm before they attempt to reintegrate a hardware scheduler for Async Compute, so they can have their cake of fat shaders with supporting DX12/Vulkan hardware-level capability, and eat it too with power/heat similar to the GTX 1070/1060 or lower.
AMD on the other hand distributes the load across synchronous & asynchronous compute. So, while the hardware does run hotter & draw more power, everything is being fully utilized to deliver frames as fast as possible on screen. It's why their max framerate is always lower than Nvidia's, but their average framerate is more consistent. The R9 290 & 480x have a 2-3fps difference on average.
This here is exactly the reason why Nvidia is trying to rush out Volta next year. The fuck-up is already noticeable and they need to address it before DX11 becomes obsolete and DX12 goes mainstream. The original plan was to let Pascal run for 2 years and release Volta in 2018, but seeing how AMD is kicking their asses in DX12, they are going into full-blown panic.
Nvidia handles everything as much as possible on the software side, which means they can tune their drivers to maximize how their cards behave under specific conditions (i.e. drivers are tuned on a game-by-game basis). As such, in DX11, Nvidia cards > AMD cards.
In DX12, the responsibility for performance is instead passed onto the developer. AMD's responsibility is improving its drivers, but those drivers mainly focus on reducing any potential bottlenecks in the minimal-overhead nature of the DX12 API. So if there are any performance fuckups, it's mainly because the developer is a moron. As such, in DX12/Vulkan, AMD sees massive gains, while Nvidia sees anywhere from 1-3% gains on average--which is considered negligible.
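A small example of what "responsibility passed onto the developer" looks like in code: a C++/D3D12 sketch (my illustration; cmdList and backBuffer are hypothetical handles assumed to exist) of an explicit resource transition the application has to record itself, the kind of thing a DX11 driver would have inferred behind your back:

```cpp
// Illustrative sketch: under D3D12 the application states resource
// transitions explicitly; the driver no longer guesses them for you.
#include <d3d12.h>

void recordPresentBarrier(ID3D12GraphicsCommandList* cmdList,
                          ID3D12Resource* backBuffer) {
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = backBuffer;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    // Declare that the image is done being a render target and is about to
    // be presented. Get this wrong and you get corruption or a device hang,
    // i.e. the "developer is a moron" failure mode described above.
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PRESENT;
    cmdList->ResourceBarrier(1, &barrier);
}
```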
You still haven't addressed the original question.
I'm pretty sure I have, but since you're not satisfied, ask it again and phrase it differently so I can address it. Please.
>290x still that high
>>nvidia cards released at the same time are falling behind
>fucking how
>such bullshit
First you go off about D3D12 and then you say that Nvidia > AMD in D3D11 in general.
Not my question, by the way.
It is just a shame AMD has such shitty drivers for DX11 and OGL. The Digital Foundry video benchmarking Vulkan on Doom showed how gimped they are on OGL compared to Nvidia, and how Vulkan allowed the hardware to be utilised properly.
Luckily for the future we will have far more DX12 and Vulkan shit coming out so it matters less.
Except it doesn't. 980 =/= Nano (with the only difference being that the 980 can actually Overclock unlike AMDshit)
Yeah, already addressed that
here, here, here, here, here, AND here. I've written enough that you SHOULD be able to reasonably follow along in understanding why Nvidia cards are doing poorer while AMD cards are getting better.
If you still can't understand it, ask someone else.
Look at the difference with 1440p.
Yes you can overclock the 980 but i have a 390x so that bumps up the performance
The issue is that even if AMD had excellent drivers for DX11 and OGL 4.3, anywhere from 25-30% of their GPU would sit idle, because it was always designed from the perspective that developers would utilize command queues and other low-level, CONSOLE-LIKE development paths for their GPUs.
AMD didn't have the money to bet on developer initiatives the way Nvidia did with its TWIMTBP platform, specifically all these implementations: developer.nvidia.com
That's not to say that AMD didn't try, but they couldn't throw money at the problem the way Nvidia could brute force their victory with $$. So AMD focused as much of its R&D as possible into improving their architecture for the inevitable philosophical shift between DX11, which prioritized a top-down, software-based implementation, and DX12, which prioritized a bottom-up, hardware-based implementation of HOW code is handled, managed, optimized, and executed by the respective hardware.
So while AMD hardware simply could not deliver the very high framerates that Nvidia could--partially due to Nvidia software being used in games with negligible visual improvement, and often at the expense of their own older-generation hardware: Crysis 2, The Witcher 3, Fallout 4, etc.--AMD did at least manage to maintain a consistent minimum framerate that was generally over 30fps. While 30fps isn't considered playable by PC standards, it is by console standards, which make up a significant portion of the industry.
All that said, I hope Vulkan becomes the de facto standard for game creation moving forward, with all major engines: UE4, CryEngine 5, Unity 5 supporting the API on all Windows and Linux platforms. Especially since its implementation of Async Compute is considerably superior to the one in DX12.
Neither are 1440p cards.
The 980 is better than the 390x. The 980 is arguably better than the Fury X because it can overclock far and away above what the Fury X can overclock to.
Too bad AMD will never be relevant in the PC gaming market
>mfw still rocking 290 Sapphire
Feels good.
>980 is better than a 390x
>The R9 470X performs on par with it in Titanfall 2 at 1080p maxed out in DX12
Nig, the 470X is an entry level GPU going toe to toe with a last generation flagship and matching in performance. Are you crazy?
>relevant
>wccftech.com
>went from 26.2% to 34.2%
>gained 8% in 1.5 years
If Nvidia doesn't implement hardware schedulers and supporting technologies ON die with Volta, that 34.2% is going to jump up to 40% and then 50% until they do. DX12 and Vulkan, both, are a serious threat to their bottom line; and the more they put it off, the more they'll suffer.
Finally, 34.2% is a significant degree of relevance in the gaming market. Don't be ridiculous.
>higher is better
That's subjective buddy, stop stating your opinions as facts.
>Nig, the 470X is an entry level GPU going toe to toe with a last generation flagship and matching in performance. Are you crazy?
See: >)
I just gave you a 15 game average, take your cherry picked benches and ram em up yer ass kunt
>AMD
Nobody cares m8
>AMDcuck makes thread
>proceeds to samefag thread with huge copypastas
>nobody really pays attention
>samefag replies to his own posts
Why does this always happen?
Why are all of these GPU war threads almost exclusively made by AMDfags?
>argument is about the 980
>post image not involving the 980
>instead has the 980Ti
>moving goal posts
Way to be a nigger.
You're buttblasted enough to reply.
buyers remorse and insecurity
But he's right.
see
Shills. Just Report + Sage
My 1050 Ti seems to hold up okay. I can turn down some pointless settings to get 60fps.
My argument was intended to show you that cherry picking can do anything. Obviously the 1060 isn't better than a fucking Fury X.
See: for a 15 game average
The 980 is definitely better than a 390x
>960 almost 10 frames faster than 760 even tho they are almost the exact same PCB and GPU chip
pls stop posting this lie on the internet
>calling me an AMDcuck
My sides are in orbit, you dumb fuck.
Buyers remorse primarily.
I used to own a 280x and had a whole folder of "NVIDIA BTFO" posts. Then I got a 980 ($200 brand new) and haven't looked back since.
When your company is losing, you feel the need to defend them. See how defensive Nintendrones are relative to Sonyggers
you don't get to call him out, at least he's making arguments instead of engaging in the same dumb brand war
The 960 and 760 aren't remotely the same chip
holy fuck i really cant tell the difference
But user, I have a 280x and even I know that Nvidia usually holds the advantage.
Unless it's Kepler of course, my 280x stomps the 770 despite originally performing worse.
>change the printed text on the chip
>add 2 stream cores
>raise the price back to 350$
>"hey guys look the gtx960 is totally new"
Nice try OP
top has 1% more fog/steam
bottom has 1% more filmgrain
please tell us
i would post the youtube link but i wont for obvious reasons ; my nvidia control panel no longer lets me change my global anti aliasing settings ; my next gpu will be an rx480
Top is medium, bottom is ultra.
>tfw can hit 60fps on almost everything with my overclocked 960 on high
after rebates for the 4gb version it was only $150 from fry's electronics of all places
Yeah no they dont so go back to gaf you dumb cunt
If you're gonna switch, wait for Vega which comes out Q2 next year. You'll be able to get a GPU better than the R9 480X for similar prices.
also forgot to mention i don't see a point in upgrading when my gtx760 does rocket league in 1440p 144fps with all the shit console graphics like motion blur and dof turned off
Do you have a 144Hz monitor?
i have a benq xl series 144hz 1440p 27 inch master race gaming monitor
They do.
if i use anything but the 2 year old drivers for my nvidia gpu then i risk blowing up the vram or losing 30 fps in any video game i play - dark souls 3 becomes unplayable at lowest settings 1080p and drops to 20 fps.
how
my 750 ran DS3 perfectly
you're being informative to people who are brand loyal.
no matter how well you explain, all he understands is his buyers regret and blind fanboyism fighting each other in his tiny head.
my friend has a gtx580 which is leagues above a 750ti and if he uses the newest drivers he gets 12 fps in dark souls 3 - nvidia has garbage drivers - get over it
the 750 series technically belongs to the 900 series architecture, where nvidia doesn't gimp the hardware via software (soon they will tho, because no one's fucking buying a 1080 at $1000)
>amd posters explain in well built and easy to read posts why the amd cards are beating the nvidia cards
>nvidia posters basically post "LOL AMD ISNT RELEVANT!" "LOL AMD SUCKS"
and this is why nvidia is dying.
The 580 is Fermi and not better than the 750 Ti
it runs circles around the 750ti in any game that isn't gimped by nvidia drivers like dark souls 3.