Should we blame the 14nm GlobalFoundries process or the Polaris architecture?

Other urls found in this thread:

youtube.com/watch?v=V54W4p1mCu4
arstechnica.com/apple/2015/10/samsung-vs-tsmc-comparing-the-battery-life-of-two-apple-a9s/
tomshardware.com/news/iphone-6s-a9-samsung-vs-tsmc,30306.html
golem.de/news/geforce-gtx-1060-vs-radeon-rx-480-das-bringen-direct3d-12-und-vulkan-1607-122214.html

Lmao

Why are AMD cards so bad with power?

Difference in architecture.

nvidia chose to optimize for a serial queue and use a software queue to make optimizations.
amd chose to optimize for parallel queues and use hardware to optimize games.

the hardware option costs extra power. this is what you see.
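Rough toy model of that tradeoff in Python, with made-up millisecond numbers rather than real driver behavior: a single serial queue runs graphics and compute back-to-back, while independent hardware queues can overlap them (idealized, assuming no contention).

# Toy model only, made-up timings: serial queue vs parallel hardware queues.
graphics_ms = 10.0   # hypothetical graphics workload per frame
compute_ms = 4.0     # hypothetical async compute workload per frame

serial_total = graphics_ms + compute_ms        # one queue: tasks run back-to-back
parallel_total = max(graphics_ms, compute_ms)  # separate queues: tasks overlap (idealized)

print(f"serial queue:    {serial_total:.1f} ms/frame")
print(f"parallel queues: {parallel_total:.1f} ms/frame")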

amd and nvidia took different approaches to architecture.

those differences have pros and cons; one of the cons for amd is power consumption.

Then why is the 1060 also faster?

Seems like the 1060 is just better in every way.

Because Nvidia didn't want Fermi 2.0. AMD are so incompetent.

Benchmarks are influenced by game settings and game choice. Look at multiple benchmark reviews and create a meta analysis. Find out what the outliers are and try to eliminate any bias.


>youtube.com/watch?v=V54W4p1mCu4

Meta-analysis of the GTX 1060 reviews here, if you don't want to spend 20-30 minutes crunching the data.
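If you do want to crunch it yourself, here's a rough Python sketch of that meta-analysis idea; the site names and relative-performance figures below are made up for illustration, not real review data.

# Pool relative-performance numbers from several reviews and flag outliers.
# All numbers and site names are hypothetical placeholders.
from statistics import median

reviews = {                 # "RX 480 as % of GTX 1060", per review (made up)
    "site_a": 88.0,
    "site_b": 92.5,
    "site_c": 79.0,         # maybe a game selection that favors one vendor
    "site_d": 90.0,
    "site_e": 91.0,
}

med = median(reviews.values())
for site, score in reviews.items():
    deviation = score - med
    flag = "  <-- possible outlier, check its game list" if abs(deviation) > 5 else ""
    print(f"{site}: {score:.1f}% (median {med:.1f}, {deviation:+.1f}){flag}")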

I'd blame the process more than the architecture. Just look at TSMC 16nm vs Samsung 14nm (very similar to GloFo 14nm) with the iPhone 6s' A9: arstechnica.com/apple/2015/10/samsung-vs-tsmc-comparing-the-battery-life-of-two-apple-a9s/

The Samsung A9 consumed ~40% more power at load than the TSMC A9.

I said that 2 months ago and all the AMDfags bullied me :c

Neither. It's the result of a design choice to ensure the Polaris 10 die was as cheap as possible.

AMD has always had higher power draw.

Stop regurgitating debunked nonsense

tomshardware.com/news/iphone-6s-a9-samsung-vs-tsmc,30306.html

Will ARM ever replace x86?

In the future will my desktop PC just be a dock for my phone? That would be so cool.

Supplemental

AMD might have chosen to make their cards more future-proof by enabling new tech, while Nvidia chose to optimize for the benchmarks and games that are out right now, so most tests make the competitor look like shit.
Now who has made the better choice financially? What use is the "oh my god, the 480 is great even after 5 years" when AMD is bankrupt at that point, not selling any new cards anyway, because the old one is just as good as the new one?
They made some great products, but they suck at making money, and they will die for that.

...

nah

The higher power draw is because AMD cards still use hardware schedulers. Nvidia removed theirs after the 400 series proved to be a disaster for heat and power draw.

Why would anyone do that, just go on the internet and be stupid?

The Polaris 10 die draws 110W nominally. Of the board's total target power of 150W, the memory takes 40W; that 8GB of 8Gbps GDDR5 sucks down that much power.
The die itself only pulls ~110W because of how it's clocked, and it's only clocked that high to compensate for having 32 ROPs.
It only has 32 ROPs to keep the layout as stupidly simple as possible.
The whole layout was designed to maximize yields, to be as economical as possible.

The hardware schedulers in the ASIC have nearly nothing to do with the total power draw figure. The SIMD lanes and memory PHY are the largest power-drawing blocks by a long shot.
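Back-of-envelope check of those figures (these are the approximate numbers quoted above, not official specs):

# Rough RX 480 power budget using the approximate figures from the post above.
gpu_die_w = 110       # Polaris 10 die at shipping clocks
gddr5_w = 40          # 8GB of 8Gbps GDDR5
board_target_w = 150

print(f"die + memory = {gpu_die_w + gddr5_w}W of a {board_target_w}W board target")
# 110 + 40 = 150: the memory alone is over a quarter of the budget,
# which is why the schedulers barely register in the total.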

Not in our lifetimes.

Yeah, the only way x86 is getting usurped is if the arch replacing it can still execute x86 code with near 1:1 parity.
No one is going to give up the massive software library we've built up.

Is that the Sup Forums equivalent of this meme?

>reduced power over the 380
>bad

Didn't they fix this?
>MFW my 780Ti consumes less power in idle even on multimonitor

Any ACX 1070 fags here? Would you recommend it to me? I was also thinking about the 1060, but I need something that can run 4K decently (high settings).

>DX12 bans multithreading
You can't make this shit up.

Is there any proof at all of this? Or is it that the market is heading in a certain direction which AMD predicted and Nvidia did not?

It's bait. Either by a retarded nvidiot with no idea what he's typing or a false-flag kek'r.

A 7850 beats your card in Vulkan Doom.
I had a 770 and I liked it before I sold it.

Because of the gimped drivers. Unfortunately there is no AMD alternative. As I said I'm running 4K.

Why would Nvidia gimp their own drivers?

I am fucking done with AMD.

Everywhere I turn I see evidence of Nvidias superiority.

AMD is always the underdog. WELL FUCK THEM.

I will gladly pay more for Nvidia since AMD can't seem to be fucking competitive!

You all know in your heart of hearts, Nvidia was always better.....

>Why would Nvidia gimp their own drivers?
To get people to upgrade? In case you didn't know, they gimp older cards in their newer drivers all the time.

The biggest issue with AMD is the RAM speed: it typically only has two settings, super low or maxed out. If you are on a 1440p+ monitor above a 60Hz refresh rate, or on dual displays, the RAM gets jacked up to max speed. On my 390, I use a profile to keep my VRAM clocked at 150MHz so it doesn't spike to max speed, which lets it sit at a draw of less than 20 watts. And the reason for the somewhat higher draw while gaming is that the VRAM uses a bit more power than the Nvidia counterparts, since it's ramped up like hell and has double the bandwidth.
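For what it's worth, on Linux there's a rough equivalent of that profile through sysfs. A minimal sketch, assuming a Radeon driven by the amdgpu driver, that these paths exist on your card, and root privileges:

# Pin VRAM to its lowest DPM state via amdgpu sysfs (sketch; paths may differ per system).
CARD = "/sys/class/drm/card0/device"

with open(f"{CARD}/power_dpm_force_performance_level", "w") as f:
    f.write("manual")        # allow forcing individual DPM states

with open(f"{CARD}/pp_dpm_mclk") as f:
    print(f.read())          # list available memory clock states (index: clock)

with open(f"{CARD}/pp_dpm_mclk", "w") as f:
    f.write("0")             # pin the lowest memory clock state so VRAM stops spiking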

I support this and Apple's planned obsolescence.

It's true, look it up. They gimp their own games; I can't play The Witcher, for example.

>3 yuans have been deposited to your green account, the Way its Meant to be Banked(tm)

Why does anyone care about power consumption anyway... I was rocking twin 295x2s for a while. Roughly 12000 watts at full load. Zero fucks given.

>12000 watts

Whoops.... 1200.

I never felt it was very important; most people are not running their system under load 24/7.

laptops?

Not running under load is actually worse for AMD. With two non-identical monitors attached or when playing video, the power consumption is 6 times that of Nvidia. Look at OP's pic.

your parents will care since they are complaining about the power bill :^)

>mobile

AMD is just retarded now. They used to be good but now every product is a failure.

>using the smiley with a caret nose

What stops Nvidia users from using OpenGL?

>overclocked non-reference board vs an AMD reference
here, let's make the comparison more equal

more

more...

Blame the architecture. The difference between Samsung 14nm and TSMC 16nm isn't large enough to account for the literally 2x perf/watt that AMD is behind Nvidia now; that's mostly the result of the fact that they have made no real improvements since GCN 1.0.

I suspect that AMD reinvested most of the money saved from firing engineers into hiring substandard H-1B import labor and paying shill/astroturfing marketing firms and the like.

Nice faked images, Rajesh; there's plenty of evidence that the RX 480 does not overclock past a 1350MHz core clock without LN2 and volt modding.

because more power used = more heat and more fan noise

>being this retarded

>don't disagree with me or point out my lies or else i'll call you retarded

>What use is the "oh my god, the 480 is great even after 5 years" when AMD is bankrupt at that point, not selling any new cards anyway, because the old one is just as good as the new one?

Cards like the 480 are never even relevant after 5 years; you'd be lucky for even a flagship card to be good enough to game at low/medium settings after that long.

Cards like the 970, 480, and 1060 will only have about 1-2 more years of decent performance to extract before they're completely obsolete.

>don't disagree with me or point out my facts or else i'll call you retarded

This. I don't think we should blame Samsung/GlobalFoundries. The architecture is not optimized because there's no money.

>constantly redirecting to leddit
Yeah, that's a thing that Sup Forums spillage does all the time; you really need to go back.

>no amdrone has addressed these yet

>facts

where did you get these factual benchmarks?????

480 is so much fail

you are so much shill

>rated at 150W
>pulls 163W on average while gaming
When will AMD stop lying?

When will you stop being a newfag?

Nvidia separately clocks every aspect of their cards; AMD only has one clock rate throughout.

It's also why you see near-linear gains when you OC an AMD card, while with Nvidia... well... you OC it FAR more and get far less, if you can even call what you do with Nvidia "overclocking".

Heaven benchmark was the only time it was caught; there's also the 500 series, which has to use older drivers than the cards support because of blatant gimping.

As of late, Nvidia just has game devs put things like retarded levels of tessellation in games and refuses to put limiters in the driver for older cards, along with other effects they abuse that run better on a 970/980 (and soon the 1070/1080/1060) and aren't supported on hardware one generation old.

Proof of what this retard says?
AMD has crappy drivers with an overhead. Why, I can't tell you for sure, but it seems to have to do with async; they were ready for async compute when the 7000 line came out.

Then no new API came along that really helped AMD, and what was shown of DX12 was only 'look, we have texture streaming now', so they made Mantle.

Mantle removed the overhead of DX11 and gave AMD a fairly large boost on crappy CPUs and even on good ones.
It also had async, which is responsible for about 10% more performance.

You see Nvidia lose out because they made a DX11 ASIC that has some ability to do more than just that, so they can emulate async, which is why they take a performance hit when it's introduced.

Volta will likely have true parallel async, but this shit they have now is just a 'look, we can do it too' for marketing.

I have argued this for a while: why not have a new instruction set with a legacy CPU socket?

280X here, still playing most games either maxed out or with mixed medium and high settings. Even then, I could have stuck it out with the 5770 for a few more years.

>mfw dx12

...

>implying AMDrones are smart enough to notice stuttering

...

Feels good when you can upgrade to a new GPU for free in around 6 years because you chose a GPU that doesn't consume a shitload of power.
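Quick sanity check on that claim; the wattage gap, hours, and electricity price below are assumptions picked for illustration, so plug in your own.

# Back-of-envelope: electricity saved by the lower-power card over ~6 years.
watt_gap = 60          # extra watts the hungrier card draws under load (assumed)
hours_per_day = 3      # gaming hours per day (assumed)
price_per_kwh = 0.30   # electricity price per kWh (assumed; varies by region)
years = 6

kwh_saved = watt_gap / 1000 * hours_per_day * 365 * years
cost_saved = kwh_saved * price_per_kwh
print(f"~{kwh_saved:.0f} kWh saved, roughly {cost_saved:.0f} in your currency")
# With these inputs: ~394 kWh and ~118, so "a new GPU for free" needs high power
# prices, long gaming hours, or a bigger wattage gap than assumed here.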

#triggered

>#

>removes Project CARS because of "nvidia optimization"
>doesn't remove hitman because of its heavy optimization for GCN

Lol I always knew this guy was a fraud. This video is next levels of butthurt.

>comparing 2 different uarchs
>thinking it's the same
>using TPU as a reliable source

Hitman doesn't have anything that offloads onto the CPU like PhysX does.
I fail to see how it's the same.

That is just how nvidiot reasoning works

#reallytriggeredrightnow

>what is architectural optimization
>nvidia does it: reeee nvidia so evil
>amd does it: i-its perfectly f-fine

Logic does not work on a desperate shill user.

golem.de/news/geforce-gtx-1060-vs-radeon-rx-480-das-bringen-direct3d-12-und-vulkan-1607-122214.html

Some interesting results all told.

polaris is more efficient than pascal

>The hardware schedulers in the ASIC have nearly nothing to do with the total power draw figure
Yeah, bullshit. Nvidia are literally not capable of using ACEs because Fermi was a housefire when they tried, so they just got rid of them and went back to 2005-level hardware with Maxwell and Paxwell.

I used to watch him, but his newer vids are a massive turn-off since he went full AMD fanboy mode.

AMDRONES ON SUICIDE WATCH

Can someone explain what I'm looking at here?

>10 yr old cpus

who cares

>w-who cares

The people these cards are marketed towards.

Oh please. At least test a 2500K, or whatever the AMD equivalent is.

I'm getting a 1060 myself but testing an i5-750 is just retarded

The 1060 isn't that cheap; it's retarded to pair it with systems that are worth less than the card alone. A current i3 or i5 makes much more sense to compare, and even then you're going to see loads of normie builds pairing it with i5 K-models or i7s.

Anyway, it's not news that Nvidia cards run better than AMD equivalents on lower-tier CPUs.

And at stock speeds no less, when every one of those chips is capable of 3.5GHz minimum, usually more like 3.8-4.

You do realise that 50% of Steam users only have dual-core CPUs, right? Pairing it with an i5-750 is perfectly logical, since a lot of people who want to play the latest games won't think of upgrading their CPU, only their GPU. This is even more evident when we see that the 2nd most used GPU in the world is a GTX 960. With 50% of people using dual-core CPUs and pairing them with x60-series GPUs, it makes perfect sense to say that these are the people this card is marketed at. People keep going on about muh 980-performance 1060, but they forget that it's still a fucking x60-series GPU, which would be the GPU of choice for the people who want to upgrade their toasters, since it's the midrange card.

50% of Steam survey responders are more likely on a laptop than not, where even a lot of "i7"s are dual-core.
Laptops, you know, that thing where a dGPU is practically of no worth whatsoever.

>You do realise that 50% of steam users only have dual core cpus

That's because of laptops, and the vast majority of laptops are running CPUs that are faster (more IPC) than a decade-old desktop chip.

what are the pros of amd?

A simple example: GCN doesn't give a solitary fuck about context switching - you throw any sort of workload at GCN and it will crunch the lot with no fucks to give. Start feeding Nvidia chips graphics and compute workloads without proper scheduling and the card will choke, as it will make lots of latency-intensive context switches.
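A toy timing model of that point, with purely illustrative numbers: crunching both workloads concurrently versus serializing them with a per-switch penalty.

# Toy model, made-up timings: concurrent execution vs switch-heavy serialization.
graphics_ms, compute_ms = 10.0, 4.0
switch_ms = 0.5      # hypothetical cost of each context switch
switches = 2         # e.g. graphics -> compute -> graphics

concurrent = max(graphics_ms, compute_ms)                     # both at once (idealized)
serialized = graphics_ms + compute_ms + switches * switch_ms  # serialize with switch cost

print(f"concurrent: {concurrent:.1f} ms, serialized with switches: {serialized:.1f} ms")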

>all of the 50% of people using dual-core CPUs are running laptops and there aren't people in the world using old toasters with old dual-core CPUs