I just realized Vega is a turning point for AMD's GPU division; they've done something not even ATi has done


Vega is the first AMD arch since forever where AMD had bigger individual shaders than Nvidia.

Let's take a look at die sizes.

Vega 10 - 530 mm2 - Shading Units: 4096
GP102 - 471 mm2 - Shading Units: 3840

Simple math says that Vega's die is 12.5% bigger than GP102's, but it only has 6.67% more shaders.
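
Quick sanity check on those percentages; a throwaway sketch using nothing but the die sizes and shader counts quoted above:

# sanity check on the die size vs shader count comparison above
vega10_area, vega10_alus = 530.0, 4096   # mm^2, shading units
gp102_area, gp102_alus = 471.0, 3840
area_delta = (vega10_area / gp102_area - 1) * 100   # ~12.5% bigger die
alu_delta  = (vega10_alus / gp102_alus - 1) * 100   # ~6.7% more shaders
print(f"die: +{area_delta:.1f}%, shaders: +{alu_delta:.1f}%")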

Even going back to Nvidia's G80, I don't remember this ever happening; Nvidia has always had bigger shader arrays than ATi/AMD.
Last gen's GM200, which is the Maxwell Titan X, only had 3072 at the same die size as Fury X. In fact, the whole situation with Fury X and Vega having the same number of shaders is completely unreal, considering Vega has twice the transistor count of Fury X/Fiji.

Now the question remains: how buffed up are the individual shaders in Vega compared to Fiji? They have almost twice the transistors dedicated to them.

Other urls found in this thread:

anandtech.com/show/2116
anandtech.com/show/2231
en.wikipedia.org/wiki/Tiled_rendering
realworldtech.com/tile-based-rasterization-nvidia-gpus/
en.wikipedia.org/wiki/Memory_controller
youtube.com/watch?v=b_ZMOn0X6jw&list=WL&index=6

MOAR COARZ

Yes, Vega is basically Maxwell, including tiled rasterisation.

so what you're saying is that it's going to burn down my house.

The exact opposite is what's happening with Vega?

If Titan XP's 250W TDP is going to burn down your house then yes, AMD's 250W TDP Vega will also burn down your house.

t. 5.1GHz 7700k owner

It's bigger cores, actually.

DESIGNATED

Vega 10(?) - 7.728 ALU per mm2
GP102 - 8.1528 ALU per mm2


Though some 15% of the die is dedicated to stuff that's not the ALUs, Nvidia does seem to have more of them per mm2 now.
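
The density figures check out from the same numbers; a minimal sketch, nothing beyond dividing the quoted ALU counts by the quoted die sizes:

# naive ALU density: shading units per mm^2 of total die area
print(round(4096 / 530, 3))   # Vega 10 -> ~7.728 ALUs per mm^2
print(round(3840 / 471, 3))   # GP102  -> ~8.153 ALUs per mm^2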

How did you come up with the idea that dividing the die size by the number of stream processors gives a good, comparable estimate of the size of the shaders?

Vega will have a radically different memory controller, a different scheduler, etc. I bet Pascal's shader size might still be bigger, especially considering the lack of a hardware scheduler.

>POOR VOLTA

While Nvidia was fucking around with clocks for two gens, AMD has been improving IPC.

Because neither AMD nor Nvidia is ever gonna tell you the individual shader size, and everyone knows over 80% of a GPU is ALUs, so there's no more accurate estimate short of getting an electron microscope.

Nvidia uses a good amount of die area on ROPs and geometry units that AMD doesn't, which again makes Nvidia's shaders smaller.
Then you've got the GDDR5 memory controller vs. HBM2; the HBM2 PHY is quite a bit smaller, so Nvidia again has less room to fit shaders in.
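
If you push that line of reasoning one step further, you can knock an assumed non-ALU fraction off each die before dividing. The 15%/20% figures below are purely illustrative guesses, not measured floorplan data, so treat the result as a sketch of the argument rather than a fact:

# hypothetical refinement: subtract an assumed non-ALU share of each die first
vega10_area, vega10_alus = 530.0, 4096
gp102_area, gp102_alus = 471.0, 3840
vega_non_alu  = 0.15   # assumed share: HBM2 PHY, ROPs, geometry, front end (guess)
gp102_non_alu = 0.20   # assumed share: GDDR5X controller, more ROPs/geometry (guess)
vega_mm2_per_alu  = vega10_area * (1 - vega_non_alu) / vega10_alus
gp102_mm2_per_alu = gp102_area * (1 - gp102_non_alu) / gp102_alus
print(round(vega_mm2_per_alu, 4), round(gp102_mm2_per_alu, 4))  # conclusion depends entirely on the guessed fractions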

And now AMD has also improved their clocks
Nvidia is LITTER RALLY FINISH

The geometry units have 2x the throughput per clock in Vega, but what about the shaders? I doubt they're twice the IPC; at most I'd give them some 30-40%.

only time will tell
>Just™ Wait™

Computex isn't that far off

Still over a month.

You can't wait 5 weeks?

No

My GPU died suddenly on me a week ago. I'm looking to spend 500 Canadian moneys on a replacement. What's the best thing I can get right now?

Or are there any announcements/releases in less than a month that would give more options or lead to price cuts?

All these features Nvidia had years before AMD copied and stole them:

Tile-based immediate mode rasterization: Maxwell, Sept 2014
FP16x2 performance: Tegra X1, January 2015
512TB 49-bit virtual address space: Pascal, April 2016
HBM2 support: Pascal, April 2016

AYYMD HOUSEFIRES, nothing but a shameless uninnovative copy

doesn't nvidia copy all of AMD's ideas too though?

>nothing but a shameless uninnovative copy

kek'd heartily

When unified shaders first became a thing didn't AMD have like 800 when Nvidia was at like 400?

Nvidia first to unified shaders & CUDA compute Nov 2006
First to FP64 compute, 2008

All these were first invented by Nvidia, AYYMD nothing but an inferior copy

Vega at Computex in May.
Even if it's shit, Nvidia might drop prices on the 1080 and 1070. Can't see them dropping prices on the 1080Ti and Titan though.

Nice meme. Are you also implying that Kaby Lake is a copy because it's x86_64?

Considering they are using GDDR5 memory, fucking yes.

Dumb underage Nvidiot.

Tile-based rasterization is a PowerVR technique from 1996, also used in like 10 different major products, consoles included, before Nvidia "invented" it.
FP16, aka binary16, is a 2002 standard; Nvidia's modern consumer GPUs, including the Titan XP, run FP16 at a fucking 1/64 rate, so what the fuck.
AMD GPUs have supported unified memory since 2013.
HBM is a Hynix/AMD invention; AMD had several HBM GPUs in both enterprise and consumer in 2015.
Meanwhile, Nvidia was backing HMC, which bombed hard.

Now,
Unified shaders have been in the GeForce 8 series, which is 2006, and the AMD 2000 series, which was also 2006.
Nvidia only moved from separate shader clocks to unified clocks in 2012; talk about being decades behind.
ATi introduced tessellation with TruForm in 2001.
AMD integrated the memory controller with K8.
GDDR3/5 were largely ATi/AMD and Hynix efforts; again, Nvidia only got it with Fermi.


Get back to Sup Forums you cancer.

>Vega is the first AMD arch since forever where AMD had bigger individual shaders than Nvidia.

So considering GCN and its successors have much greater IPC than Nvidia, more shaders, and comparable clocks, will Volta be the next $200 GPU trying to undercut the 580X for the budget market?

Found the virgin

...

...

People here really believe that Nvidia was first with unified shaders?

You're a moron and it shows

Tile-based deferred rendering is not the same as tile-based immediate mode rasterization.

AMD HOUSEFIRES HD2000 series did not launch until May 2007, months late and behind Nvidia

Keep on lying though, uneducated ignorant AYYMDPOORFAG

Predicting performance would be pretty easy if they had strapped in another 2048 cores, but they haven't; they didn't change the number of ALUs at all. And considering the huge jump from 28nm SHP to 14nm FinFET, you can't really make a precise judgment on its performance. All we really know is that Vega is clocked somewhere close to 1600MHz; whether that's boost or base clock, I don't really know.
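
For what it's worth, the one thing you can actually compute from "4096 ALUs at roughly 1600MHz" is theoretical FP32 throughput, counting an FMA as two FLOPs per ALU per clock. A quick sketch; the 1.6GHz figure is just the rumoured clock from this post, and real game performance obviously doesn't follow from peak TFLOPS alone:

# peak FP32 throughput from ALU count and clock alone
alus = 4096
clock_ghz = 1.6                         # "somewhere close to 1600MHz", boost or base unknown
tflops = alus * 2 * clock_ghz / 1000    # 2 FLOPs per ALU per clock (FMA)
print(f"~{tflops:.1f} TFLOPS FP32")     # ~13.1 TFLOPS with these assumptions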

anandtech.com/show/2116

>November 8, 2006

anandtech.com/show/2231

>May 14, 2007

Never ever trust AYYMDPOORFAG LIARS, they can't even get their facts right

en.wikipedia.org/wiki/Tiled_rendering

>Tile-based rasterization is nothing new in graphics. The PowerVR architecture has used tile-based deferred rendering since the 1990’s, and mobile GPUs from ARM and Qualcomm also use various forms of tiling.

Even fucking Larashit had it before Nvidia.
t. David Kanter

You know what's even funnier? Adreno, which is an AMD GPU sold to Qualcomm, had it before Nvidia.

Tile-based deferred rendering != tile-based immediate mode rendering

Are you too stupid to understand such a simple fact? Oh yeah, you are; I mean, you even lied about AMD having unified shaders in 2006, which they never had.

realworldtech.com/tile-based-rasterization-nvidia-gpus/

Okay sir, I trust you more than Kanter, you know, the guy who actually brought the subject up first.
Just admit that Nvidia can't make new innovations for shit; it's not difficult.

Nvidia is way more innovative than you

First to unified shaders, first to GPU compute in 2006 while AYYMD couldn't respond until 2012

For 12-year-olds, 5 weeks is like forever.

I hope they are; I'm not a private IP company.

>first to unified shaders
By 7 months.

About the same time it took for Nvidia to pull out Fermi ;)
Which was still using hotclocks, some 10 years after ATI moved from them, also GDDR5.


Nvidia: 2
ATi/AMD: 10

Imageon = Adreno
Imageon = 2002
Tile-based rendering, 2002-2015: ATi/AMD, PowerVR, MS Talisman, Dreamcast, Larrabee
Tile-based rendering, 2014: Nvidia

Welcome aboard, Nvidia.

Nvidia first to HBM2 and GDDR5X in 2016; AYYMD still can't use higher-clocked GDDR5 because their GPUs consume too much power, ahahahaha

Even first to GDDR3 back in 2013

Stay mad, uneducated faggot

>le you-have-to-be-18-(eight-teen)-years-old maymay

ebin :-----------DDDD

None of these are ever used in a PC

Stay mad, uneducated faggot

HBM? Another AMD invention? Where's Nvidia's Hybrid Memory Cube?
HBM2? Take a look at the MI25, with HBM2.
>Graphics DDR3 SDRAM (GDDR3 SDRAM) is a type of DDR SDRAM specialized for graphics processing units (GPUs) offering less access latency and greater device bandwidths. Originally designed by ATI Technologies,[1] it has since been adopted as a JEDEC standard.
>Originally designed by ATI Technologies,[1] it has since been adopted as a JEDEC standard.


>i-i-it doesn't c-count..


Your tears are succulent.

Let's recap.
AMD/ATI:

First to unified clocks.
Inventors/Joint inventors of HBM and GDDR
First products with GDDR/HBM
First to tessellation
First to unified memory
First to fully enabled integrated memory controller


Nvidia:
First to unified shaders by 7 months
First to launch a tactical weapon called Fermi
I guess also first to install spyware so you can record your gaming

>Never ever trust AYYMDPOORFAG LIARS, they can't even get their facts right

22 November 2005: ATI Xenos in the Xbox 360, a full WHOLE YEAR before Nvidia's G80.

>I guess also first to to install spyware so you can record your gaming

>console

Not PC

Every once in a while I enter these threads in hopes of learning something new or getting some info on upcoming GPUs, but instead I'm treated to a few dudes flinging shit at each other and not being productive in any way.

This board fucking sucks; you're all autists.

No, at least not this year, and it won't be called Vega.

Wasn't the GPU in the 360 from AMD?

en.wikipedia.org/wiki/Memory_controller

>Some microprocessors in the 1990s, such as the DEC Alpha 21066 and HP PA-7300LC, had integrated memory controllers

Still lying about AMD inventing things? Pathetic

>Team Green on full damage control

wew lad

Oh man that backpedalling.

Just grit your teeth and admit it: Nvidia can't invent shit. The only decent thing they invented is CUDA, which is a pretty damn good parallel compute framework.
PhysX isn't even their invention.

Half finished job

>DEC Alpha 21066 and HP PA-7300LC, had integrated memory controllers; however, rather than for performance gains, this was implemented to reduce the cost of systems by eliminating the need for an external memory controller.

As far as I know AMD's IMC was revolutionary.

What difference does it make when you claim that Nvidia was first? And that's the reason the PC launch was delayed. Plus, Windows Vista came in November 2006 anyway.

AYYMD didn't invent anything, they bought ATI garbage, 5B for a shit company with shit technology, TOP KEK

Still over 2B in debt

Radeon will always disappoint as long as that pajeet is in charge

SiliconDoc please.
You're embarrassing INElite-sensei

ATI didn't invent anything either; they bought ArtX, another company.

Everything Nvidia did, they invented without buying another company.

3dfx

>what is 3dfx

>3dfx
>ageia
Lmao

Nvidia only bought 3DFX patents and IP, they didn't buy the company, try educating yourselves, uneducated morons

Why are you arguing here when you don't even know your own fucking history?

This is some next gen brain damage, you don't even have to be born in the 1980's to know this, you can fucking GOOGLE it in a second.

Yes, Xenos was made by ATI/AMD.

This needs popcorn

Anon, I don't even know how to react to this.
Patents and IP are THE COMPANY.

Brain damage.

Nvidia is the most innovative company, first to GPU compute, first to variable refresh rate, first to FP16x2 performance

AYYMDPOORFAGS here are all uneducated AYYMD asskissers and asslickers unable to think beyond worshipping a shit company that is dying from bankruptcy

...

>Now the question remains: how buffed up are the individual shaders in Vega compared to Fiji? They have almost twice the transistors dedicated to them.

I actually thought this was because they're buffing up FP64 again, but that's not the case; Vega 20 is apparently the FP64 chip, not Vega 10.
So the extra transistors go to pure single precision.

>Nvidia only bought 3DFX patents and IP, they didn't buy the company

That's different though. Nvidia doesn't have a history of housefires like AMD does. Nvidia's 250W TDP doesn't actually mean 250W like it does for AMD. AMD's is worse.

Lol? Here's AMD's last 250W TDP card, the 7970; everything else high-end has been 280-300W.

>Nvidia doesn't have a history of housefires
>what is fermi
I thought you had to be 18 to visit this site.

And here's the 250W Pascal (not XP)

Wait.
So even the 500W PSU is excessive?

Nvidia only bought their ideas and inventions, they didn't buy the company

First of all, TDP stands for thermal design power, not power consumption.
Secondly, everyone changes how they measure TDP every generation; it's retarded.
And Intel's mobile CPUs are the worst culprits.

>how do i greentext

Depends on whether you're overclocking and how many drives and shit you have.

Even at that you'll be fine, but I'd personally always leave some 40% of the PSU unused for efficiency reasons.

People also forget to mention how much power the VRM and VRAM use: a good 20-30W on high-end GPUs goes to the VRM alone, and another 50-60W to high-clocked GDDR5.

The GPU dies themselves are insanely efficient; this is why APUs are fucking awesome when they have a shitload of shaders in them.
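
Rough sketch of the PSU math being argued in this exchange. The 250W board figure and the "leave ~40% of the PSU unused" rule come from the posts above; the CPU and rest-of-system numbers are made-up placeholders:

# crude check of the "leave ~40% of the PSU unused" rule of thumb
gpu_board_w = 250        # board power incl. VRM losses and VRAM, per the posts above
cpu_w, rest_w = 90, 40   # placeholders: overclocked CPU, drives/fans/board
draw = gpu_board_w + cpu_w + rest_w
psu_w = 500
print(f"~{draw}W draw = {draw / psu_w:.0%} of a {psu_w}W PSU")  # ~76% loaded, tighter than the 60% target
print(f"PSU sized for 40% headroom: ~{draw / 0.6:.0f}W")        # ~633W with these guesses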

Op = fag?

Slightly less for the VRM, but you're right.

All of this means nothing because every API besides Vulkan is a dinosaur.

I remember seeing a chart recently that showed ATI/AMD historically having lower power consumption than Nvidia. That chart even had an extra 1000 mark just for one Nvidia card.

>Nvidia doesn't have a history of housefires

>Nvidia doesn't have a history of housefires
How new are you?

>what is fermi

Nvidia invented housefires.

>let's look at high-level measurements without concern for the lower-level details which developers will have to concern themselves with to gain substantially
It's far more important for mainstream games on PC to consider driver overhead right now. If your GPU architecture lets the driver work more effectively, you can see good performance.
It's rare to be fragment shader bound.

>you merely adopted the fire... i was BORN in it

>GCN will be deprecated and driver development won't continue
Well, it was a wild ride

PowerVR Kyro cards were launched for PCs in 2002
ATi Stream and the Close-To-Metal initiative predate CUDA
Also, BrookGPU predates both
Stop lying, dumb Pajeet
/thread

Nvidia Pajeets and Sup Forumsfags love to lie

They also got most of their employees
Totally not buying the company, right?

Variable refresh rate was already part of the eDP VESA standard

Vega is still a 16-wide vector arch; GCN is alive.
Though there's really no performance left to squeeze out of anything older than Polaris now.

It's happening boyz!
Raja is teasing Vega
youtube.com/watch?v=b_ZMOn0X6jw&list=WL&index=6