Simple math says that Vega is 12.5% bigger than GP102, but only has 6.69% more shaders.
Even going back to Nvidia's G80, I don't remember this ever happening; Nvidia has always had bigger shader arrays than ATi/AMD. Last gen's GM200, the Maxwell Titan, only had 3072 shaders at the same die size as Fury X. In fact, the whole situation with Fury X and Vega having the same number of shaders is completely unreal considering Vega has twice the transistor count of Fury X/Fiji.
Now the question remains how buffed up are the individual shaders in Vega compared to Fiji? They almost have twice the transistors dedicated to them.
Yes, Vega is basically Maxwell, including tiled rasterisation.
Henry Sanders
so what you're saying is that it's going to burn down my house.
Juan Morgan
The exact opposite of what's happening with Vega?
Liam Long
If Titan XP's 250W TDP is going to burn down your house then yes, AMD's 250W TDP Vega will also burn down your house.
t. 5.1GHz 7700k owner
Nathan Fisher
It's bigger cores, actually.
Juan Barnes
DESIGNATED
Ethan Lee
Vega 10(?) - 7.728 ALUs per mm2
GP102 - 8.1528 ALUs per mm2
Though ~15% of the die is dedicated to stuff that isn't ALUs, Nvidia does seem to have more of them per mm2 now.
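The estimate above is just division, so it's easy to reproduce. A minimal sketch, taking the ALU counts as given and treating the die areas (~471 mm2 for GP102, ~530 mm2 for Vega 10, back-solved from the densities quoted) as assumptions:

```python
# Back-of-the-envelope ALU density. Die areas are assumptions implied
# by the densities quoted upthread, not official figures.
dies = {
    "GP102":   {"alus": 3840, "die_mm2": 471},
    "Vega 10": {"alus": 4096, "die_mm2": 530},
}

for name, d in dies.items():
    density = d["alus"] / d["die_mm2"]
    print(f"{name}: {density:.4f} ALUs per mm2")

# Relative comparison from the top of the thread:
size_delta = dies["Vega 10"]["die_mm2"] / dies["GP102"]["die_mm2"] - 1
alu_delta = dies["Vega 10"]["alus"] / dies["GP102"]["alus"] - 1
print(f"Vega die is {size_delta:.1%} bigger with {alu_delta:.1%} more ALUs")
```

This is naive division of count by total die area; it doesn't account for how much of each die is non-ALU logic.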
Benjamin Cooper
How did you come up with the idea that dividing the number of stream processors by the die size gives a good and comparable estimate of the size of the shaders?
Vega will have a radically different memory controller, different scheduler etc. I bet the shader size of Pascal might still be bigger, especially considering the lack of a hardware scheduler.
Jacob Hernandez
>POOR VOLTA
Jordan Powell
While Nvidia was fucking around with clocks for 2 gens AMD has been improving IPC
Tyler Torres
Because neither AMD nor Nvidia is ever gonna tell you the individual shader size, and everyone knows over 80% of a GPU is ALUs, so there's no more accurate estimation short of getting an electron microscope.
Nvidia uses a good amount of die size on ROPs and geometry units, AMD doesn't, which again makes Nvidia's shaders smaller. Then you've got the GDDR5 memory controller vs the HBM2 one; the HBM2 controller is quite a bit smaller, so again less room to fit in shaders.
Jace Perry
And now AMD has also improved their clocks Nvidia is LITTER RALLY FINISH
Brody Lee
The Geometry units have 2x the throughput per clock in Vega, but what about the shaders? I doubt they're twice the IPC, at most I give them some 30-40%
Luis Cox
only time will tell
>Just™ Wait™
Adrian Nelson
Computex isn't that far off
Justin Clark
still over a month
Ian Nelson
You can't wait 5 weeks?
Evan Peterson
No
Asher Jones
My GPU died suddenly on me a week ago. I'm looking to spend 500 Canadian moneys on a replacement. What's the best thing I can get right now?
or are there any announcements/releases in less than a month that would give more options or lead to price cuts?
Jose Mitchell
All these features Nvidia had years before AMD copied and stole them:
Tiled based immediate mode rasterization - Maxwell, Sept 2014
FP16x2 performance - Tegra X1, January 2015
512TB 49-bit virtual address space - Pascal, April 2016
HBM2 support - Pascal, April 2016
AYYMD HOUSEFIRES, nothing but a shameless uninnovative copy
Jordan Walker
doesn't nvidia copy all of AMD's ideas too though?
Benjamin Kelly
>nothing but a shameless uninnovative copy
kek'd heartily
Logan Moore
When unified shaders first became a thing didn't AMD have like 800 when Nvidia was at like 400?
Dylan Diaz
Nvidia first to unified shaders & CUDA compute, Nov 2006
First to FP64 compute, 2008
All these were first invented by Nvidia, AYYMD nothing but an inferior copy
Jace Campbell
Vega at Computex in May. Even if it's shit, Nvidia might drop prices on the 1080 and 1070. Can't see them dropping prices on the 1080Ti and Titan though.
Nice meme. Are you also implying that Kaby Lake is a copy because it's x86_64?
Robert Bennett
Considering they are using GDDR5 memory, fucking yes.
Adam Watson
Dumb underage Nvidiot.
Tiled based rasterization is a PowerVR technique from 1996, also used in like 10 different major selling products like consoles before Nvidia "invented" it. FP16, aka binary16, is a 2002 standard; Nvidia's modern consumer GPUs, including the Titan XP, are fucking 1/64 FP16, so what the fuck. AMD GPUs have supported unified memory since 2013. HBM is a Hynix/AMD invention; AMD had several HBM GPUs in both enterprise and consumer in 2015. Meanwhile Nvidia was backing HMC, which bombed hard.
Now, unified shaders have been in the Geforce 8 series, which is 2006, and the AMD 2000 series, which was also 2006. Nvidia only moved from shader clocks to unified clocks in 2012, talk about decades behind. ATi introduced tessellation with TruForm in 2001. AMD integrated the memory controller with K8. GDDR3/5 were largely ATi/AMD and Hynix efforts; again, Nvidia only got it with Fermi.
Get back to Sup Forums you cancer.
David Ortiz
>Vega is the first AMD arch since forever where AMD had bigger individual shaders than Nvidia.
So considering GCN and its successors have much greater IPC than Nvidia, and more shaders, and comparable clocks, will Volta be the next $200 GPU trying to undercut the 580X for the budget market?
Evan Scott
Found the virgin
Ethan Adams
...
John Bennett
...
Brody Smith
People here really believe that Nvidia was first with unified shaders?
Ayden Clark
You're a moron and it shows
Tiled based deferred rendering is not the same as tiled based immediate mode rasterization
AMD HOUSEFIRES HD2000 series did not launch until May 2007, months late and behind Nvidia
Keep on lying though, uneducated ignorant AYYMDPOORFAG
Eli Gonzalez
Predicting performance would be pretty easy if they'd strapped in another 2048 cores, but they haven't; they didn't change the amount of ALUs at all, and considering the huge jump from 28nm SHP to 14nm FinFETs you can't really make a precise judgment on its performance. All we really know is that Vega is clocked somewhere close to 1600MHz; whether that's boost or stock I don't really know.
>Tile-based rasterization is nothing new in graphics. The PowerVR architecture has used tile-based deferred rendering since the 1990’s, and mobile GPUs from ARM and Qualcomm also use various forms of tiling.
Even fucking Larashit had it before Nvidia. t. David Kanter
You know what's even funnier? Adreno, which is an AMD GPU sold to Qualcomm, had it before Nvidia.
Hunter Gonzalez
Tiled based defered rendering != Tiled based immediate mode rendering
Are you too stupid to understand such a simple fact? Oh yeah, you are; I mean, you even lied about AMD having unified shaders in 2006, which they never had
Okay sir, I trust you more than Kanter, you know the guy who actually brought the subject up first. Just admit that Nvidia can't make new innovations for shit, it's not difficult.
Ryder Wood
Nvidia is way more innovative than you
First to unified shaders, first to GPU compute in 2006 while AYYMD couldn't respond until 2012
Ethan Brooks
for 12 year olds 5 weeks is like forever.
Christian Hughes
I hope they are, I'm not a private IP company.
>first to unified shaders
By 7 months.
About the same time it took for Nvidia to pull out Fermi ;) Which was still using hotclocks some 10 years after ATI moved away from them, and also GDDR5.
Nvidia: 2
ATi/AMD: 10
Daniel Cruz
Imageon = Adreno
Imageon = 2002
Tile based rendering, 2002-2015: ATi/AMD, PowerVR, MS Talisman, Dreamcast, Larrabee
Tile based rendering, 2014: Nvidia
Welcome aboard Nvidia.
Ayden Price
Nvidia first to HBM2, GDDR5X in 2016, AYYMD still can't use higher clocked GDDR5 because their GPUs consume too much power, ahahahaha
HBM? Another AMD invention? Where's Nvidia's Hybrid Memory Cube? HBM2? Take a look at the MI25, with HBM2.
>Graphics DDR3 SDRAM (GDDR3 SDRAM) is a type of DDR SDRAM specialized for graphics processing units (GPUs) offering less access latency and greater device bandwidths. Originally designed by ATI Technologies,[1] it has since been adopted as a JEDEC standard.
>Originally designed by ATI Technologies,[1] it has since been adopted as a JEDEC standard.
>i-i-it doesn't c-count..
Your tears are succulent.
Wyatt Carter
Let's recap. AMD/ATI:
First to unified clocks
Inventors/joint inventors of HBM and GDDR
First products with GDDR/HBM
First to tessellation
First to unified memory
First to fully enabled integrated memory controller
Nvidia:
First to unified shaders by 7 months
First to launch a tactical weapon called Fermi
I guess also first to install spyware so you can record your gaming
Owen Hernandez
>Never ever trust AYYMDPOORFAG LIARS, they can't even get their facts right
22 November 2005: ATI Xenos on Xbox 360, a full WHOLE YEAR before Nvidia's G80.
Camden Miller
>I guess also first to to install spyware so you can record your gaming
Daniel Gomez
>console
Not PC
Owen Jones
every once in a while I enter these threads in hopes of learning something new or getting some info on upcoming gpus, but instead am treated to a few dudes flinging shit at each other and not being productive in any way
this board fucking sucks, you're all autists
Parker Lewis
no. at least not this year and won't be called vega
>Some microprocessors in the 1990s, such as the DEC Alpha 21066 and HP PA-7300LC, had integrated memory controllers
Still lying about AMD inventing things? Pathetic
Camden Mitchell
>Team Green on full damage control
wew lad
Thomas Ramirez
Oh man that backpedalling.
Just grit your teeth and admit it, Nvidia can't invent shit; the only decent thing they invented is CUDA, which is a pretty damn good parallel compute framework. PhysX isn't even their invention.
Juan Brown
Half finished job
>DEC Alpha 21066 and HP PA-7300LC, had integrated memory controllers; however, rather than for performance gains, this was implemented to reduce the cost of systems by eliminating the need for an external memory controller.
As far as I know AMD's IMC was revolutionary.
Carter Peterson
What difference does it make when you claim that Nvidia was first? And that's the reason the PC launch was delayed. Plus, Windows Vista came in November 2006 anyway.
Camden Johnson
AYYMD didn't invent anything, they bought ATI garbage, 5B for a shit company with shit technology, TOP KEK
Still over 2B in debt
Xavier Barnes
Radeon will always disappoint as long as that pajeet is in charge
ATI didn't invent anything either, they bought ArtX, another company
Everything Nvidia did, Nvidia invented it without buying another company
Logan Stewart
3dfx
Jeremiah Hernandez
>what is 3dfx
Camden Thomas
>3dfx
>ageia
Lmao
Nathaniel Diaz
Nvidia only bought 3DFX patents and IP, they didn't buy the company, try educating yourselves, uneducated morons
Sebastian Edwards
Why are you arguing here when you don't even know your own fucking history?
This is some next gen brain damage, you don't even have to be born in the 1980's to know this, you can fucking GOOGLE it in a second.
Anthony Kelly
Yes, Xenos was made by AMD.
Christopher Gray
This needs popcorn
Landon Brooks
user, I don't even know how to react to this. Patents and IP is THE COMPANY
Thomas Murphy
Brain damage.
Eli Thomas
Nvidia is the most innovative company, first to GPU compute, first to variable refresh rate, first to FP16x2 performance
AYYMDPOORFAGS here are all uneducated AYYMD asskissers and asslickers unable to think beyond worshipping a shit company that is dying from bankruptcy
Juan Gonzalez
...
William Morgan
>Now the question remains how buffed up are the individual shaders in Vega compared to Fiji? They almost have twice the transistors dedicated to them.
I actually thought this was because they're buffing up FP64 again, but it's not the case, Vega 20 is apparently the FP64 chip, not Vega 10. So the extra transistors go to pure single precision
Josiah Powell
>Nvidia only bought 3DFX patents and IP, they didn't buy the company
Cooper Jones
That's different though. Nvidia doesn't have a history of housefires like AMD does. 250W TDP of Nvidia doesn't actually mean 250W like it does for AMD. AMD's is worse.
William Diaz
Lol? Here's AMD's last 250W TDP card, the 7970; everything else high end has been 280-300W
Ayden Foster
>Nvidia doesn't have a history of housefires
>what is fermi
I thought you have to be 18 to visit this site
Charles Gutierrez
And here's the 250W Pascal (not XP)
Kevin Nelson
Wait. So even the 500W PSU is excessive?
Caleb Jones
Nvidia only bought their ideas and inventions, they didn't buy the company
Easton Reyes
First of all, TDP stands for thermal design power, not power consumption. Secondly, everyone changes how they measure TDP every generation, it's retarded. And Intel's mobile CPUs are the worst culprits.
Luke Baker
>how do i greentext
Oliver Kelly
Depends if you're overclocking, how many drives and shit you have.
Even at that you'll be fine, but I'd personally always leave some 40% of the PSU unused due to efficiency
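That 40% headroom rule of thumb is easy to turn into arithmetic. A minimal sketch; the component draws here are made-up illustrative numbers, not measurements:

```python
# Rough PSU sizing: sum estimated component draws, then size the PSU
# so ~40% of its rating stays unused (the rule of thumb above).
def recommended_psu_watts(component_draws, headroom=0.40):
    total = sum(component_draws.values())
    return total / (1 - headroom)

build = {
    "gpu": 250,   # board power of a high-end card
    "cpu": 120,   # overclocked quad core (illustrative)
    "rest": 60,   # drives, fans, motherboard, RAM (illustrative)
}
print(recommended_psu_watts(build))  # 430 W load -> ~717 W PSU rating
```

So with those numbers a 500W PSU is fine for the load itself, just without the full 40% of slack.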
Josiah Perry
People also forget to mention how much power the VRM and VRAM use: a good 20-30W for high-end GPUs alone goes to the VRM, and another 50-60W to high-clocked GDDR5.
The GPU dies themselves are insanely efficient, this is why APUs are fucking awesome when they have a shitload of shaders in them.
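As a rough illustration of that split, subtract the VRM and VRAM estimates from a 250W board power figure to get the die's share (midpoints of the ranges above, purely illustrative):

```python
# Illustrative breakdown of a 250 W board power figure using the
# per-post estimates: ~25 W VRM losses, ~55 W for high-clocked GDDR5,
# with the remainder going to the GPU die itself.
board_power = 250
vrm_loss = 25     # midpoint of the 20-30 W estimate
vram_power = 55   # midpoint of the 50-60 W estimate
die_power = board_power - vrm_loss - vram_power
print(f"GPU die budget: {die_power} W of {board_power} W board power")
```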
Charles Brown
Op = fag?
Liam Long
Slightly less for the VRM, but you're right.
Kevin Wilson
All of this means nothing because APIs are dinosaurs besides Vulkan
Jack Turner
Remember seeing a chart recently that showed ATI/AMD historically having lower power consumption than Nvidia. That chart even had an extra 1000 mark just for one Nvidia card.
Nicholas Cooper
>Nvidia doesn't have a history of housefires
Levi Lewis
>Nvidia doesn't have a history of housefires How new are you?
Lincoln Harris
>what is fermi
Dylan Edwards
Nvidia invented housefires.
Jace Allen
>let's look at high level measurements without concern for lower level details which developers will have to concern themselves with to gain substantially
It's far more important for mainstream games on PC to consider the driver overhead right now. If your GPU architecture facilitates the driver to work more effectively you can see good performance. It's rare to be fragment shader bound.
Jayden Moore
>you merely adopted the fire... i was BORN in it
Bentley Martinez
>GCN will be deprecated and driver development won't continue
Well, it was a wild ride
PowerVR Kyro were launched for PCs in 2002
ATi Stream and the Close-To-Metal initiative predate CUDA
Also, BrookGPU predates both
Stop lying dumb Pajeet
/thread
Nvidia Pajeets and Sup Forumsfags love to lie
They also got most of their employees
Totally not buying the company, right?
Variable refresh rate was already part of the eDP VESA standard
Isaiah Ramirez
Vega is still a 16-wide vector arch; GCN is alive. Though there's really no performance to squeeze out of anything older than Polaris now.