What exactly happened here? Why does Polaris have the same efficiency as a 28nm part? Look, Fiji was very close to Maxwell in perf/watt. So why does it look like AMD completely shelved whatever magic they did with Fiji and went with something different for Polaris? Couldn't a simple shrink of Fiji net a much higher perf/watt than what we have now?

Any theories on this extremely unusual phenomenon?

HBM requires less power than GDDR5.

I'll tell you why: it's because Polaris was made to be as cheap as possible.
That means:

- Cutting the power-transistor budget (in short, removing the low-power transistors that help the rest of the logic use less power; that's a real thing nowadays and it eats a lot of die area)
- Fiji possibly couldn't scale well, so it wasn't worth the effort

Since some people aren't aware of this: some parts of the die cost more to fab than others, and they're usually not heavily replicated like the ALUs. That's why you don't see 96 ROPs on AMD's GPUs. Polaris, being a midrange part, can run fine with a third of that, and half of what Fiji has.

tl;dr AMD doesn't consider the market Polaris is targeting ripe for a large swath of power-reducing features and IP. Vega will be what you'd call a Fiji replacement, perf/watt and all, so expect something similar to Pascal.

Some 20-30W wouldn't account for such a large perf/watt disparity.
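To put rough numbers on that (the figures below are ballpark assumptions for illustration, not measurements): even shaving the full 30W off a ~165W board only improves perf/watt by roughly 20%, which is nowhere near the gap people are pointing at.

```python
# Back-of-envelope: how much perf/watt can 20-30W of memory savings buy?
# Assumed, illustrative numbers: ~165W board power for the RX 480 and ~120W
# for a competing card at roughly similar performance.
rx480_power = 165.0      # W, assumed
competitor_power = 120.0 # W, assumed
memory_saving = 30.0     # W, upper end of the claimed GDDR5 -> HBM saving

perf = 1.0  # normalize performance to 1 for both cards

ppw_stock = perf / rx480_power
ppw_saved = perf / (rx480_power - memory_saving)
ppw_competitor = perf / competitor_power

print(f"gain from dropping 30W:     {ppw_saved / ppw_stock - 1:.0%}")       # ~22%
print(f"gap to the 120W competitor: {ppw_competitor / ppw_stock - 1:.0%}")  # ~38%
```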

Speaking of ROPs, they always seem to be a thorn in AMD's side; they haven't changed at all in the last seven years and can still only do 16 pixels per clock.

Come on, nobody actually wants to discuss this? I thought you guys loved architectural details

The RX 480 is a factory overclocked card, and its 8GB of VRAM is drawing 40 watts by itself.
See pic related

It's not; lower the clockspeed on a 480 and there won't be a large perf/watt increase.
The card's voltage is set far too high in order to increase the number of sellable chips.
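A rough sketch of the usual first-order intuition (the voltages and clocks below are assumptions, not measured RX 480 values): dynamic power scales roughly with V² × f, so dropping voltage at the same clock cuts power without costing performance, while dropping the clock alone loses performance nearly as fast as it saves power.

```python
# Minimal sketch of the dynamic-power approximation P ~ C * V^2 * f.
# All numbers are assumptions for illustration, not RX 480 measurements.
def dynamic_power(voltage, clock_mhz, c=1.0):
    """Relative dynamic power; c folds in switched capacitance (arbitrary units)."""
    return c * voltage**2 * clock_mhz

stock     = dynamic_power(voltage=1.15, clock_mhz=1266)  # generous stock voltage
undervolt = dynamic_power(voltage=1.05, clock_mhz=1266)  # same clock, lower voltage
downclock = dynamic_power(voltage=1.15, clock_mhz=1120)  # lower clock, same voltage

print(f"undervolt at same clock: {1 - undervolt / stock:.0%} less power, same perf")       # ~17%
print(f"downclock at same volts: {1 - downclock / stock:.0%} less power, ~12% less perf")  # ~12%
```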

Polaris 10 is designed and binned for yields and profitability. Not max efficiency and performance.

So AMD lied when they said Polaris gets better performance per watt?

Polaris 11 (rx 460) gets better performance per watt than Maxwell by a significant gap. Polaris 11 was designed and binned as a mobile/low power part.

Don't try to argue with people smarter than you, chump.
The voltages in the p-states are only set so generously to ensure as many dies as possible can hit the extremely high stock clocks.
It is a factory overclocked card, pushed as high as possible to increase pixel throughput, since it only has 32 ROPs.

the 950? my intel igpu gets better performance per watt too if you're settling for 480p

RX 460 is faster than the GTX 950 on the same power envelope.

No, compared to Hawaii it's significantly more efficient.

Is Vega gonna at least compete on perf/watt?

More ROPs, more CUs, lower clocks, HBM instead of GDDR5.

Yeah it'll compete on perf/watt.

So that means the 480 will remain a housefire like we're seeing now?

150-160W isn't a housefire.

I don't care as long as it's fast and cheap.

>160W, out of compliance with PCIe power spec
>Much slower than 150W GTX 1070, 82% more perf per watt thanks to superior Nvidia engineering

AYYMD HOUSEFIRES
AYYMD HOUSEFIRES EVERYWHERE

You forgot
>1070 3% yields
>RX 480 90%+ yields

Polaris was meant to be fabbed as cheaply as possible; not that GP106 wasn't as well, but Nvidia spends considerably more on R&D than AMD's CPU and GPU divisions combined.
We'll see next quarter how many units they moved in their report; AMD expects a 10% increase in revenue.

I want AMD to get out of the normalfag markets like IBM did, and stick to embedded and custom silicon.

GloFo has no yields, GTX 1070 is fully available & in stock

How retarded are you? You basically contradicted yourself in your own post. A chip that can only exist because of wafer defects (aka what we call bad yields) is in supply, while its fully enabled and much lower marketshare brother is in short supply.
What does that tell you? Certainly not that TSMC has good yields.

>Pascal level performance
>This is what AMDfags ACTUALLY BELIEVE

maybe in a few years, just like how it took AMD 2 years to catch up with the GTX 970

You seriously don't think a tiny company run by poo in loos can match Nvidia's engineering

Dumb frogposter.

Oh for fuck's sake, you're the retard who's been shouting "factory overclocked, factory overclocked" about every damn AMD part for the last twelve months.

I'm sure by now hundreds if not thousands of Sup Forums users wish you would kill yourself.

>I got proven wrong
>now I'm going to just make shit up because I'm an autist

Cute, kid.
GCN is a low-clocking arch, and the RX 480 has by far the highest stock clock of any card in the family. As a matter of fact it is clocked far, far outside of its sweet spot, and that is plain as day.

The Ellesmere/Polaris 10 die wouldn't need such high clocks if it had 64 ROPs. The only area it falls behind in is raw pixel throughput; otherwise its 36 4th-gen CUs are enough to outperform the 44 CUs in Hawaii/Grenada.
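The raw pixel throughput argument is easy to put numbers on (clocks below are approximate boost clocks, assumed for illustration): peak fillrate is just ROPs × clock, so a hypothetical 64-ROP Polaris could match the RX 480's pixel throughput at roughly half the clock.

```python
# Back-of-envelope peak pixel fillrate: ROPs * core clock.
# Clocks are approximate boost clocks, assumed for illustration.
def fillrate_gpix(rops, clock_mhz):
    """Peak pixel fillrate in Gpixels/s."""
    return rops * clock_mhz / 1000.0

print(fillrate_gpix(32, 1266))  # RX 480 (Polaris 10):  ~40.5 Gpix/s
print(fillrate_gpix(64, 1050))  # R9 390X (Hawaii):     ~67.2 Gpix/s
print(fillrate_gpix(64,  633))  # hypothetical 64-ROP Polaris at half the clock
                                # matches the RX 480:   ~40.5 Gpix/s
```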

It's a factory overclocked card. This is 100% fact.

AMD wanted to make a cheap chip,
so they overclocked it.

It's kinda weird; GP106 is smaller but has more ROPs, and that's with the slight density advantage GloFo has over TSMC.

GloFo's 14nm is shit compared to TSMC

Adults are talking, go sit in a corner child.

The Polaris 10 die was designed to be as spaced out as possible, balanced against die area. The structures within the CU are not at all dense; it's the most sparsely laid out GCN design ever.
Everything was done to make it yield as high as possible.

>how to trigger an AMDrone

If you buy AMD, you're settling because you couldn't afford an Nvidia, and you're paying the price by getting tech that has Rajeet-coded drivers and runs hotter than their bowl of spicy curry for lunch. Simple as that.

Why do you rats feel the need to butt into topics you don't know a thing about?

The target clock for GCN is 1GHz; any card released over that clock speed was either a rebrand that got a factory OC, or a chip like Fiji and P10 that had to have its clocks raised at the last minute to compete with Nvidia.

So why doesn't AMD just stick in a shitload of hardware and go with low clocks?

Die size and yields.
The RX 480 has massive profit margins and extremely high yields for a new manufacturing process.

Because GlobalFoundries/Samsung is shit

Why are Nvidiots so stupid?

>blame nvidia

So... if it's not the 14nm process, it's the shitty Polaris architecture?

I'm just curious, and know nothing about this stuff, but what makes Nvidia's cards much more power efficient?

Read the thread, you nimrod, plenty of people have explained it.

>he don't answer

Is it the 14nm or the architecture?

There are too many differences between the two to list.
Each company has a slew of IP dedicated specifically to saving power per op. AMD and Nvidia use different approaches, and their architectures are radically different from each other.

Since you seem to live in a black&white world, it would be the architecture.

The RX 480 and GTX 1060 basically cost the same due to inflated prices.

I'm not paying extra for Nvidia when I can get the same performance for cheaper by buying AMD and getting their features for free, instead of paying extra for a fucking software-restricted feature like G-Sync/FreeSync.

I don't understand why people didn't see the 480 as another 4850: mid-range performance with a cheap-as-fuck die.