How come AMD's hardware specs are always superior (bandwidth, FLOPS, texel fill rate...

How come AMD's hardware specs are always superior (bandwidth, FLOPS, texel fill rate, etc) but the cards either end up being on par or slower than Nvidia?

Other urls found in this thread:

videocardz.com/60561/amd-shows-off-radeon-rx-480-running-doom-at-1440p144hz
techpowerup.com/134460/disable-geforce-gtx-580-power-throttling-using-gpu-z
en.wikipedia.org/wiki/Denial

nvidia pays devs to sabotage AMD

Three reasons.

1. Games don't push AMD's strengths (high bandwidth, massive shader and texel throughput) and instead lean on pixel fill/throughput, which is Nvidia's forte and pretty much the only objectively measurable metric where Nvidia beats AMD.
2. goes hand-in-hand with 1.
3. Their drivers have higher overhead than Nvidia's, meaning they lose out performance from the start.

In short, it's entirely a software problem.

>but the cards either end up being on par or slower than Nvidia
Just wait a couple of years until nVidia stops optimizing drivers for that architecture.
>Fuckin Kepler.

AMD is just plain shit.

FLOPS is a theoretical figure that has no bearing on real-world performance. The calculation is:

ALUs x clock x operations per clock
e.g. 4096 x 1050 MHz x 2 = ~8.6 TFLOPS
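
If you want to sanity-check spec-sheet numbers yourself, here's a rough back-of-the-envelope sketch in Python (the 4096 / 1050 MHz inputs are just the Fury X-style figures from above, 2 ops per clock because an FMA counts as two):

# Theoretical peak = ALUs x clock x ops per clock (FMA = 2 ops)
def peak_tflops(alus, clock_mhz, ops_per_clock=2):
    return alus * clock_mhz * 1e6 * ops_per_clock / 1e12

print(peak_tflops(4096, 1050))  # ~8.6 TFLOPS, same figure as above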

Bandwidth itself is analogous to how wide a highway is. If there is no traffic using the road, then all the additional lanes do nothing. One car traveling down an 8-lane road is not 8 times faster than if it were traveling down a single-lane road. Peak bandwidth figures are very rarely reached in real workloads, and the memory controller itself can be slow enough to have a measurable performance impact in certain ops.
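
Same deal for the peak bandwidth number on a spec sheet; it's just transfer rate times bus width and says nothing about how often the bus is actually saturated. Rough sketch, with the commonly quoted 980 Ti (GDDR5) and Fury X (HBM) figures as examples:

# Peak bandwidth (GB/s) = per-pin transfer rate (Gbps) x bus width (bits) / 8
def peak_bandwidth_gbs(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

print(peak_bandwidth_gbs(7.0, 384))   # ~336 GB/s: 7 Gbps GDDR5 on a 384-bit bus (980 Ti class)
print(peak_bandwidth_gbs(1.0, 4096))  # 512 GB/s: 1 Gbps HBM on a 4096-bit bus (Fury X class)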

Texel and Pixel fill rates are again not directly tied to any real world performance benchmark, they're a small part of a real world rendering workload. If you're dealing with a big scene and you're ROP limited then you might see where this would come into play.

Hardware is not so simple as to be reduced to a few bullet point numbers. Every GPU arch has its nuances, and they're so numerous that they fill entire programming guides for every revision.

Also, I forgot to add: Nvidia is faster at peak bilinear filtering for FP16, but not for INT8 (the figure might show up as GTexel/s in some tools, but I haven't seen the measure used in a long while).

>just wait a couple of years

The never ending song of the AMDCucks

AMD wins more at raw compute than in games, and their drivers aren't as optimized; it's why they're pushing DX12 so hard and why their cards do much better under DX12.

>How come AMD's hardware specs are always superior
Their chips are denser (more transistors per mm²); that's where the higher numbers come from.

>but the cards either end up being on par or slower than Nvidia?
Worse software.

All you have to do is look at where the 7950/7970 started and where they ended up.

The 7970 was no competition for the 780 Ti; now it's vice versa. High-end Nvidia GPUs are not a good investment.

Fermi GPUs commit suicide

Kepler got gimped

Only time will tell what Maxwell will suffer.

I heard that the 980 ti and the titan x already got slightly gimped after pascal's release.

Maxwell already suffers: it lacks proper DX12 support.

I can't fucking wait for Polaris 10, this shit is taking too long.

Yea can confirm, drivers are more stable but gimped

Did you notice benchmarks:

>no sli 980 or 1080

980: impact on 1070 market position perf
1080: dual gpu min frame rate is the same as single gpu

There's a major issue with SLI where the minimum frame rate is bottlenecked to single-GPU performance.

4k results are very similar to maxwell all things considered

>relative perf multipliers

I can't wait to hear how AMD's benchmarks blow Nvidia's out of the water but all those benchmark websites mysteriously disappear from the domain registry and no YouTuber has benchmarks showing AMD is better

It's almost as if there's a conspiracy. Or AMD is put in place by Nvidia and Intel to be second-class and avoid monopoly fines (in other words it sucks)

Polaris 10 doesn't compete with Nvidia's stuff; it has a chance with DX12, but it's most likely just the 480/X.

AMD is waiting on HBM2 for Vega, and probably had to wait for GDDR5X on the 490/X

>Polaris
>videocardz.com/60561/amd-shows-off-radeon-rx-480-running-doom-at-1440p144hz
No chance it's competing with the 1070.

>doom

Why the fuck are they showcasing it with a game from 1993 that ran in total software mode?

It shouldn't be; the x80 part is meant to compete with the x60 part from Nvidia.

Are you not aware of the fact there was a new Doom released very recently?

I believe he is using wit.

> 300W
> Passive
wut

Server cases have huge amounts of airflow so graphics cards don't each need their own fan.

I'm glad delusional faggots will stop trying to claim that Polaris 10 is going to be the 490X now. That was just unbearably stupid.

Server hardware. Think about it.

FirePro cards are meant to be used in thin rack-mount server cases that have their own fans designed for whole-case airflow. Even the CPUs in those are passively cooled.

This stuff goes into servers or shit with proper case cooling.

Big enough cabinets are actually climate controlled. They're hooked right up to the building's AC. I've seen plenty that have their own dedicated units.
That's a big part of why HPC is so costly.

Off the top of my head, their architecture comes to mind.

Fury X has 4096 stream processors, the 980 Ti has 2816 CUDA cores. CUDA cores tend to be more powerful than stream processors (note that both are pretty much marketing terms for a shader core), whereas stream processors are more plentiful.

980ti has 96 ROPs, Fury X has 64.
980ti has 176 texture units. Fury X has 256.
etc.

Ah, all right. But why would a server need such a robust videocard?

AMD is hot garbage always fucking shit m8

render server
compute server
anything you don't want to do on your workstation that can use parallel computations

certain problems scale really well on a GPU architecture -- think of a GPU as thousands of really shitty CPU cores.

>Big enough cabinets are actually climate controlled. They're hooked right up to the building's AC.
I've worked with an entire server room that was hooked up straight to the AC. It was always fun opening and closing the door to the room because the pressure difference tried to keep it closed.

>running-doom-at-1440p144hz
Isn't that a good thing?

>purchase nvidia for longevity

many large buildings have full sized systems just for server room cooling, as it is more important to keep that system up on its own in an emergency than the whole building's AC

The 500 series cards didn't get gimped, they literally killed themselves.

yes


Also, should I wait for the 1060 or Polaris?

I always buy the x60 Ti from Nvidia or the x80X/x870 from AMD, those are always the perf sweet spot, but do you think the 1060 won't suck at compute (I mean TFLOPS compared to the 480X)?

That's nonsense because even the 1080 can't run it at 90FPS

Okay.

>500 series card didnt get gimped
True, the 580 was the last *80 card that didn't have any artificial power limit or any missing pieces on the PCB.

It's a 144Hz FreeSync monitor; the GPU is not running the game at 144fps.
Learn to read.

I bought a 670, I won't fall for the nvidia meme ever again.

It did have a hardware power limiter to prevent it from frying itself.
techpowerup.com/134460/disable-geforce-gtx-580-power-throttling-using-gpu-z

My 560 doesn't think so.

Whoops, was the 480 the last one without the power limiter then?

Haha filthy binned peasant, only the glorious GF100 is allowed to have that.

GF110*

Whoops.
GF100 was regular Fermi

To my understanding, every AMD card from the last (pre-Pascal) generation is faster than its Nvidia Maxwell equivalent, except the Titan X (which loses to that monster 295X2 card or something).

Before you call me an AMD fanboy, I have owned two Titan Xs and currently own two 1080s.

And heck, even in a bunch of notable benchmarks, 'non-equivalent' AMD cards beat up their 'superior' nVidia cards.

I guess nVidia stays afloat because of retards like me.

No, Fiji is still faster than GM200. On paper, at least.

That's a hell of a thing if it applies outside theory.

Yeah, and the spectacular inferno that was GF100 is why the second generation of Fermi onwards had the power limiter.

I've applied my own theory to AMD/NVidia unreleased cards.

Nvidia needs 20% less TFLOPS than AMD for equal performance.
This is in DX11; it might change with some lower-overhead API.
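
If you want to play with that rule of thumb, it's just a scaling factor on the paper TFLOPS. To be clear, the 20% figure is the claim above, not an official number, and the 5.6 TFLOPS input is just an example:

# Claimed rule of thumb: under DX11, Nvidia matches AMD with ~20% fewer paper TFLOPS
def amd_tflops_to_match(nvidia_tflops, nvidia_advantage=0.20):
    return nvidia_tflops / (1.0 - nvidia_advantage)

print(amd_tflops_to_match(5.6))  # -> 7.0, i.e. ~7 AMD TFLOPS to match a 5.6 TFLOPS Nvidia card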

TFLOPS-wise AMD's offerings are superior, but that doesn't matter in gaming, just in DX12 titles where async compute matters too.

These new "use all the fucking system resources available" APIs are pretty cool; old Xeons and the FX lineup shine and even manage to beat current Intel offerings.

The 480 was like 100C from factory though and wasn't it the 580s that literally caught fire?

480, 470, and 465 (all GF100 with various units disabled) spawned the Thermi memes, but I think it was actually the 590s that had the "burst into smoke and flames" issue. I know there was that one rather well publicized event where the guys were testing a 590, and when power was re-applied immediately following a reboot, the damn thing went pop.

There's also that one GIF floating around of the video card (I think it's an MSI card) going up so fast and hot it melted the solder for the PCI-E power connectors (~220 degrees C) nearly instantly.

Why are you running the 6700K at 4.0? At 4.3 it barely makes a temperature/voltage difference.

I still have a 480 in my system (alongside a 970) but it doesn't do anything at the moment. Just warms up the water cooling

I have had no reason to OC it yet.

I would also like to add that for the GF100 Fermi, one site got their hands on a GF100 card where the last shader cluster was disabled via bios instead of fused off. They re-enabled the last cluster so the GPU was running full fat 512 shaders, and put a fuckhuge Arctic Cooling triple fan heatsink on it.
Even with that heatsink, activating the last shader cluster increased load temperatures to 97C and doubled card power to 600W. Whether it was because it required a massive voltage increase or what, the fact of the matter is that none of the consumer silicon could safely run full fat 512 shaders.

Why waste the energy running and cooling the thing? There are plenty of better options for a secondary GPU.

>I would also like to add that for the GF100 Fermi, one site got their hands on a GF100 card where the last shader cluster was disabled via bios instead of fused off. They re-enabled the last cluster so the GPU was running full fat 512 shaders, and put a fuckhuge Arctic Cooling triple fan heatsink on it.
>Even with that heatsink, activating the last shader cluster increased load temperatures to 97C and doubled card power to 600W. Whether it was because it required a massive voltage increase or what, the fact of the matter is that none of the consumer silicon could safely run full fat 512 shaders.

Can I have the article? Sounds interesting.

Just google gtx480 512 SP

Holy fucking shit HOW? I have two 1080s and they pull below 400W during games, jesus fucking christ.

>HOW?
Fermi is Nvidia's NetBurst.

The original article I think has been deleted (it has been a while) but has the gist of it.

The GF100 silicon was really really REALLY fucking bad. 1.7% yields on average (which ended up adding to the whole pile of shit we gave nvidia). Some of the finished wafers were so bad NONE of the chips on them were viable, which is like $40k-50k in hardware per wafer.

Nvidia tried to do too much too fast, and when TSMC fucked up they got punished for it, which wasn't helped by the fact that AMD had dropped the Radeon HD 5000 series, which was superior to Fermi in many ways.

Ridiculous, lmao. I knew Fermi was a shit but I didn't know the level of shit it actually was.

Even with how bad Thermi was, AMD still barely outsold it with its far superior and cheaper Evergreen architecture.

Just goes to show you that you need far more than a good chip; marketing and brand name are 70% of sales.

Sadly this is true.

The most profitable companies invest more in marketing than in R&D

The architecture itself wasn't that bad. It was the silicon it was on that was absolutely terrible, because TSMC a shit. GF104 and the smaller dies were far better about the heat, to the point where a pair of GTX 460s were the go-to cards over a single GTX 480 due to how damn well they scaled.
When the GTX 500 series came out, Nvidia had managed to fix the bulk of the problems with Fermi (some of which included disabling most of the FP64 performance on the consumer cards), TSMC had fixed their shitty process, and the series was generally better received.

Part of the issue was that Fermi was hot-clocked (the shader domain ran at 2x the speed of the rest of the die), which means the shaders needed more voltage and thus more power to run.
Nvidia eventually did away with hotclocking entirely in Kepler, where they doubled the number of shaders to counter the halving of the clock speed, then doubled again to get more performance. They also almost completely removed the FP64 hardware, leaving a token amount for the things that really needed it, and the FP64 hardware is really power hungry.
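
Plugging that into the same peak-FLOPS formula from earlier in the thread shows why dropping the hot clock needs the shader count doubled just to stand still. Illustrative sketch only; the 1544MHz figure is the GTX 580-style hot clock, and the second line is a hypothetical non-hot-clocked equivalent:

# Doubling shaders cancels out halving the (hot) clock: peak FLOPS stays the same
def peak_gflops(shaders, clock_mhz, ops_per_clock=2):
    return shaders * clock_mhz * ops_per_clock / 1000

print(peak_gflops(512, 1544))  # ~1581 GFLOPS: 512 hot-clocked shaders
print(peak_gflops(1024, 772))  # ~1581 GFLOPS: 2x the shaders at half the clock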

Thanks for the information, I guess. Are you a GPU historian?

Not really, I just read a whole bunch of articles and shit when I was considering an upgrade to a GTX 580, and when I instead acquired a used GTX 660, I did some reading on that as well.

I've also been putting some thought into bouncing over to the red team, as GCN as an architecture has aged really fucking well.

Heh, I was considering AMD before buying two 1080s, but these were quite cheap here so why not.

In retrospect, buying two Titan Xs last generation was a bit of a waste; I should've gone for a high-end AMD dual setup like dual Fury Xs.
Hopefully AMD comes up with strong competition, it is healthy.

Well, we'll just have to wait and see. Hopefully Polaris proves to be a solid design, cause if not Nvidia is going to shove that thorny capitalist cock right up our asses without lube.

I kinda want a Fury Nano though. A Fury X won't fit in my rig because I don't have space to put the rad(s), and I likely won't have the power plugs required for a Fury, or the space, since those cards are huge.

they aren't going to use gddr5x at all.

With those narrow memory buses they sure are.

Plain old gddr5 or what?

GDDR5

Nvidia doesn't have true DX12 support, which also means Vulkan isn't fully supported; they can run those APIs, but they gain little from them.

meanwhile amd has dx12 support that improves performance meaningfully.

Without seeing the 1060, I would say it's likely going to be AMD's game. However, Nvidia has been known to cut a higher-end card down to make it price-competitive with AMD or just outright better (see Fury X and 980 Ti), so there is a chance that if the 480 is kicking Nvidia's ass too hard, they will price-adjust/cut down a higher-end card and sell it competitively price-wise.

But I despise Nvidia's business practices on top of more petty reasons, so I would say AMD even if it wasn't obvious that AMD won this generation of GPUs.

amd is shit and you are looking for rationalizations to help with your denial.

see: en.wikipedia.org/wiki/Denial

Not necessarily; narrow buses can be offset with fast memory or good cache management/large caches.

Nvidia buys out devs, and the ones they don't buy out optimize more for Nvidia due to the higher market share (due to the higher performance from buying out devs).

It's a cycle of anti-consumer bullshit.

if you are talking about consumer cards

1) You've got Nvidia gimping AMD by getting devs to use their render paths instead of a neutral one.

2) Driver overhead on AMD's part.

3) Nvidia pushing retarded levels of things it does well (see tessellation).

4) No 20nm node, which is what Fury was designed for; due to it being repurposed for 28nm, there isn't enough bandwidth to properly feed all the cores.

Now when you're talking about pro cards, this goes out the window, as they have signed drivers tested to work with pro programs, guaranteed not to crash, and if they do you get dev time to fix the issue. Here it's less an issue of how fast the card is and more how much memory it has, and AMD usually comes out ahead; the only reason to use Nvidia was that they were first to GPGPU with CUDA and a lot of shit uses it.

Also this.
The 7970/280 gained something like 20% performance from drivers over the years.

I fucking hope the same gains will make it to my 380 but I doubt it.

You are now imagining how long it would take to encode, say, 4 hours of 8K video content with a normal workstation.

Or you could have access to 64 of those cards with a much quieter, more energy efficient workstation and have it be done in seconds.

I forget the reasoning, but it's mostly because it's at best a stopgap. Nvidia likely won't put HBM2 on consumer cards this gen, but AMD will on Vega, and this is where the memory limits would become an issue too.

Why use expensive GDDR5X when GDDR5 gets the job done, and GDDR5X still isn't fast enough to compete with HBM2 anyway?

On the cards that don't need the extra memory bandwidth, GDDR5; on the cards that do, HBM2.

>have it be done in seconds

I'll be the first to admit that I don't have a clue what to do with it but that makes my dick diamonds.

Won't that split the mid and high-end market massively in terms of performance?

...

Imagine the deliciousness, you could fill up your hard drives with 5GB x265 4K rips with excellent quality in a couple of hours.

AMD's Zen APUs are looking to be ridiculously powerful - they're the ones looking to the future, where there will be no market at all for low-end cards because even mid-tier APUs will do 1080p in most games.
Polaris 10 is a stopgap for the 470/480/X; Vega will be the '490/X' (they'll keep the Fury/Radeon Pro branding) as well as probably another 2 or 3 cards higher based on it.
Polaris 11 is for laptops, which will have weaker APUs.

I'm just worried that in the future hbm2 cards will be ridiculously expensive and gddr5 cards won't be fast enough anymore as an alternative.

I'm hype for Zen though senpai.

AMD has done right by me far more than Nvidia has.

I still can't get over Nvidia requiring DX10 and Vista for 3D when I was looking into dropping the money on a 120Hz 3D setup.

Many of you will call it stupid, but you can't say that till you've played a game in 3D or VR. At that point in time (around 8-10 years ago), 3D was the closest thing to VR you would get, and they lost me because of that; my next card after the 6800 Ultra was a 5770 1GB, and I refuse to look back till AMD isn't in the ballpark performance-wise any longer.

Intel also fucked me hard with the P4, and along with its business practices I refuse to use them if AMD is viable... Zen is the deadline for AMD to get viable before I get a 6-core Intel, possibly an 8-core or dual-CPU Xeon setup; it all depends on how much money I'm willing to throw at the issue.

They still use GCN as a backbone, so likely you will see gains, especially in the driver overhead department when they fix shit like that.

Not really. Take a raw photo from an SLR, that shit is 28+MB.
Toss away data that is over 8/10-bit and you save a fuckload of space,
then compress it losslessly and you save more.
If you compress it lossy then you can make the file even fucking smaller with next to no degradation.

That's kind of what video cards already do, and new compression methods allow slower memory to act as though it's faster than it really is.

And memory bandwidth isn't really a limiting factor for games till you hit 4K, and the 480/X isn't made to play 4K; it's made for 1080p and 1440p if the demos are anything to go by.
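
Rough sketch of how the compression maths works out; the 25% savings figure below is purely made up for illustration, not a measured number:

# If compression means only a fraction of the bytes actually cross the bus,
# the memory behaves as if it were proportionally faster.
def effective_bandwidth_gbs(raw_gbs, fraction_of_bytes_moved):
    return raw_gbs / fraction_of_bytes_moved

print(effective_bandwidth_gbs(256.0, 0.75))  # a 256 GB/s bus behaving like ~341 GB/s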

>How come AMD's hardware specs are always superior

Uh-huh.

My current prediction:
Polaris 11 - R7 460 (dropped with zen apu launch)
Polaris 10 (low) - R9 470
Polaris 10 (medium) - R9 480
Polaris 10 (high) - R9 480X (launched with vega)
Polaris 10 (maxed out) - R9 490
Vega (low) - Fury R5
Vega (medium) - Fury R7
Vega (high) - Fury R9

Because you're comparing Hawaii to GM200 you twat.

Territory performance

Only HBM2 matters for 4k 120hz, to pave the way for 8K. At the rate of adoption and sheer bandwidth consumption, market bottlenecks

AMD early HBM2 adoption vs Nvidia early release design

DisplayPort 1.4 supporting native FREESYNC is the end of gsync®

If you don't care about the noise then get a used dual socket Xeon server. They have an abundance of cores and processing power. It is possible to make them quieter too.

I feel the same as you do though. Either Zen is good or I'm sticking with Intel. My i5-2400 has served me fucking well.

HBM will inherently come down in cost as more and more GPUs need it to achieve high bandwidth within a sensible power envelope. Right now HBM isn't really needed, as you can still clock GDDR5 or GDDR5X high enough without starting a fire, but that isn't going to be viable for long as heat density increases while GPUs get more and more memory (and thus need more and more bandwidth).

HBM is the solution to this problem. HMC could've been as well but JEDEC said nyet on that.

I'm comparing two GPGPU pro cards released in 2015 in the same price segment.
And one of them outperforms the other by a good 25%.

What comparison do you want exactly?

I'm surprised AMD even still has a pro line; I've been working in the CGI industry for a good 5 years now and I've yet to see a FirePro lmao.