More sandbagging?

AMD had claimed for the longest time that Vega had 4 Shader Engines, with a 2.75x per-clock increase in geometry (ROP) performance over Fury X.
With a 1600MHz clock instead of 1050MHz, that's 4.19x the geometry performance.

Well they showed a new die shot yesterday, and it's suddenly different.
There's two ways to interpret this:
>They still consider it 4 shader engines, but each one is split in two groups, and they get that higher ROP performance per clock partially through this splitting up
>It's actually 8, and they were sandbagging
If it's the latter, Vega actually has ~8.38x (2 × 2.75 × 1600/1050) the polygon drawing performance at 1600MHz that Fury X has at 1050MHz.
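
Quick sanity check on the math (a minimal sketch; the 2.75x per-clock figure is AMD's claim, the rest is plain arithmetic):

```python
# Geometry throughput scaling from the numbers above.
FURY_CLOCK = 1050      # MHz, Fury X
VEGA_CLOCK = 1600      # MHz, Vega (claimed)
PER_CLOCK_GAIN = 2.75  # AMD's claimed per-clock geometry (ROP) gain

clock_ratio = VEGA_CLOCK / FURY_CLOCK            # ~1.52x from clock alone
four_se_reading = PER_CLOCK_GAIN * clock_ratio   # ~4.19x: 2.75x covers the whole chip
eight_se_reading = 2 * four_se_reading           # ~8.38x: 2.75x was per 4-engine half

print(f"4 SE reading: {four_se_reading:.2f}x Fury X geometry throughput")
print(f"8 SE reading: {eight_se_reading:.2f}x Fury X geometry throughput")
```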

Also for the retards:
This doesn't mean it'll be 419% or 838% better total performance than Fury X, just in raw polygon drawing performance. And low polygon drawing performance tended to hold GCN back, as the chips were compute-heavy: good at screenspace effects, less good at lots of polygons and useless amounts of tessellation.
But 175%-275% perf is not unlikely at all.

really makes you think

it was the raw throughput that held back gcn, their polygon drawing was fine

are they giving shader engines the bulldozer treatment?

further analysis:
It appears the wide rectangles at the top and bottom of the 8 shader engines could be the geometry processors and ROPs.
So it would appear the work comes in from the outside and finishes toward the middle, instead of the top-down flow of GCN.
It also appears there are 2 geometry processors and ROPs per shader engine.

But.. again that doesn't make it clear whether they consider this 4 shader engines or 8.
I do think they could be considering this as four, but with the data splitting up into multiple geometry processors on either end for whatever reason.

It also looks like the global data share is on one side and L2 on the other, with over double the area for asynchronous compute engines. The global data share is fucking massive.
Bottom is obviously memory controller.
Top right appears to be hardware accelerators.
The middle between the Shader Engines... I have no fucking clue.

Nah, it was not having enough ROPs for the compute, which meant lots of CUs would sit idle when there was much more polygon drawing work needed compared to compute work.

More like the Zen treatment. They can more easily cut this down for mobile and smaller GPUs.
GCN was fixed at 4 shader engines and you had various degrees of lopsidedness with ROPs and CUs.
Same reason you get issues where the RX580 is hardly better at 1080p than the RX570, but is much better at 1440p in screenspace heavy games.

amd's strategy is to clearly blindside their competition

keller already shitstomped intel and theyre in a complete panic

amd knows exactly what failed with fiji

they've poached the best parts of maxwell/pascal and are about to get comparable performance per FLOP to nvidia

Oh. It could actually be the geometry processors and ROPs in the middle, and the skinny rectangles above and below them could be the async compute engines.
That actually makes more sense because the CUs should be short and wide.
And looking at it that way, I do see 4 pairs of geometry processors+ROPs.
But if that's the case, the gap between the rows of CUs (looking at it turned 90 degrees) makes no sense.

it was the culling process that was bottlenecking gcn
they literally didn't have one for a long time and tried to force everything through the ACEs to parallelize it so they could gain perf
didn't really end up well for them
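
For anyone wondering what culling buys you: it's throwing away triangles that can't produce pixels before they eat fixed-function throughput. A toy sketch of the idea (nothing to do with AMD's actual hardware; coordinates are assumed to already be projected to screen space, with counter-clockwise front faces):

```python
# Toy screen-space triangle culling, the kind of dedicated stage GCN lacked.
# v0, v1, v2 are (x, y) tuples after projection.
def should_cull(v0, v1, v2):
    # Signed area via the 2D cross product of two edges.
    area = (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v2[0] - v0[0]) * (v1[1] - v0[1])
    # area == 0: degenerate triangle, covers no pixels.
    # area < 0: back-facing (assuming counter-clockwise front faces).
    return area <= 0

triangles = [
    ((0, 0), (10, 0), (0, 10)),   # front-facing: kept
    ((0, 0), (0, 10), (10, 0)),   # back-facing: culled
    ((0, 0), (5, 5), (10, 10)),   # collinear/degenerate: culled
]
survivors = [t for t in triangles if not should_cull(*t)]
print(f"{len(survivors)} of {len(triangles)} triangles survive culling")
```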

infinity fabric will be used on vega..
i think amd will clock the memory lower than expected just so they can showcase the full potential of it

So did this asshole buy all the Vega from AMD? Is this why we've been waiting?

its not the first time tho..
apple always buys the highest bins of every chip they can get, and if it's top notch they'll pay insane money for it

Yes. Lack of culling really hurt Fury. But more polygon throughput on top of culling doesn't hurt.

amd always has multiple lineups of performance chips

Aren't the higher bins in short supply until later in production? I mean, yields improve over time, right?

he is fighting with powerpc and intel right now so he'll probably run to amd lol

lets hope he gets an apu with a separate gpu and a hidpi screen with displayport and freesync

>wishful bsd dev

they used the same technique as ryzen. we don't know the yields but i'm pretty sure they're quite high

Don't regret the wait now. Should've never doubted AMD.

I just found out that nvidia can only use 1 of the wayland compositors

amd supports 7+

once again nvidia wants to create their own shit instead of a standard protocol

just like fucking egl
reeeeeeeeeeeeeeeeee

Yeah and? It's still slower than a 1080

>reeeeeeeeeeeeeeeeee
Just sell it second hand.

>Yeah and? It's still cheaper than a 1080
>Yeah and? It's still freer than a 1080
>Yeah and? It's still less gimped than a 1080 in 2 years

Fury X at 1600MHz can't be slower than a 1080.

Stop shitposting.

Didn't this happen with Maxwell, splitting the SM into smaller units for better utilization?

youtube.com/watch?v=owL_KY9sIx8

this looks really promising. Vega will deliver.

>More sandbagging?
I doubt it. If they had something impressive they'd have shown their hand by now. Simple as that. They have nothing to gain at this point by sandbagging. Vega is going to disappoint you if you think AMD is still holding back.

>They still consider it 4 shader engines, but each one is split in two groups
This is how they arrived at the 2.75x increase in the first place, obviously. Pulling an extra 2x out of your ass is pretty desperate.

It's a cool die shot though, you can see all the space dedicated to the infinity fabric logic pretty clearly.

If it has the same perf per TFLOP as Tahiti, it will be a monster.

>they'd have shown their hand by now

No, because they learned their lesson on what happens if you overhype something with Ryzen.

You meant Polaris and Fiji.
If Vega is half as good an arch as Zen, it would be a miracle.

Vega is what Fiji failed to be. Considering Fiji's perf after seeing Vulkan/DX12 and its LN2 overclocking numbers, I am very hyped for Vega.

Someone fire up paint and mark the hardware components there, please.

What's that between the shader arrays? Looks quite significant in size.

But it's still only 64 ROPs, bottleneck...?

see

Is this design more scalable?

Has AMD got another winner on their hands? Is this their RV700?

I want release dates and prices

July.
$500+ starting.

>Is this their RV700
it seems so.

You telling me low end Vega is 500?

What lowend Vega? That's months away, you've got big Vega and bigger Vega now.

we don't know the lineup.
we know there is a 56 CU Vega (big Vega) and a 64 CU Vega (bigger Vega) thanks to Apple. no price, nothing.

So AMD is pulling a Fury/Titan meme release? Where they rebrand a third time for everything else?

I feel this is more Tahiti than Terascale, Tahiti was really good, could OC like mad, no bottlenecks.

im pretty happy with all their offerings but i hope they have some lower power consumer stuff with aes and iommu for muh servers

intel only cares about the enterprise

????
Where's the AMD BTFO DEAD BANKRUPT line?
No..No. You're not are you.
No way.
No way, get fucked.
No way get fucked cunt, fuck off.

You can't be seriously having a serious GPU discussion on Sup Forums of all fucking places.

Fuck off fag, this thread is pure cause summerfags are asleep

agreed
vega vulkan is going to be great

Hold me as I weep tears of joy.

How does geometry throughput of Polaris/Fiji compare to Pascal? I think I saw techreport doing the tests, but fuck me if I can find 'em while at work on a phone.

bask in thine glory for it is short lived

inb4 it ends up like other amd cards with high theoretical power but it will still be shit at gaymen

Around two to three times lower.

And so summer awakens

>trying to interpret anything from this """"""die shot""""""
uhh
>AMD had claimed for the longest time that Vega had 4 Shader Engines
no they didn't
in fact they basically said that it would work over more than 4 engines
see: anandtech.com/show/11002/the-amd-vega-gpu-architecture-teaser/2

AMD just needs some good value chips to retain marketshare, it can curbstomp Nvidia once it gets enough money from their CPUs

Yeah. I don't understand when people say AMD support on Linux is bad. Maybe if you're on an early 3.x kernel or older.

Nowadays dies usually have sensors on them so they can be tested and binned very quickly and easily.

Even a Fury X with a 50% clock increase beats the 1080 by a lot. Vega is much more than just a 50% clock increase.
Retard.

A lot happened with Maxwell. Maybe. idk. I don't follow Nvidia architecture much because they are so secretive and you often don't figure out features until YEARS of reverse engineering later.

Yes, I disagree with his analysis, if you read the OP.

Per-clock throughput per ROP is 2.75x. Also we don't know if it really is 64 ROPs.

Not really. Halo products sell the midrange. Consumers are dumb.

Way too many arch changes to gauge performance off a 1600MHz Fury X.

These frontend changes can be anything from a 10% to 40% increase in games alone, the latter for particularly shitty ones like Fallout and Dishonored.

No one is really doing that.
It's just that the baseline minimum performance increase without arch changes would be ~50% over Fury, from clock alone. But obviously it's higher than that.

The cherry on top would be if it can actually overclock.

>they learned their lesson on what happens if you overhype something with Ryzen.
How is promising a 40% IPC increase but delivering 52% overhyping?
Unless you're one of those people who listened to the shills promising the world about it just so they could pull this trick of claiming it was 'overhyped and didn't deliver'.

Well, Samsung shat out a very impressive quantum dot freesync monitor a few days ago, makes me think there's some meat to Vega.

Haven't most of Samsung's monitors these days been FreeSync supporting?

optocrypto.com/2017/06/05/samsung-chg70-first-hdr-monitor-freesync-2/

WHO WAS IN THE WRONG ?

WHAT DID THEY MEAN BY THIS ?

REALLY MAKES YOU THINK

Average quality though.

This one is 144Hz, quantum dot, VA panel with 3000:1 static contrast, and freesync 2, meaning HDR10 and 125% sRGB coverage.
If it were 4k, it would have been literally perfect.

And will probably cost a good $700 at least.

Wat is this quantum dot meme

Doesn't it still have black shift issues since it's a VA panel? That stopped me from buying a CF791.

Personally, if I had to choose between 1440p with that contrast and HDR, and 4k, I'd pick the contrast any time of the day. Most 4k high refresh screens are still around 1000:1.

>175%-275%
Hnnnnggg, give me some budget Vega cards for that sweet altcoin mining

Wait for tftcentral reviews.
I don't think they'd be advertising it as a premium gaymer monitor as is, and IPS has its own issues.

anon, what altcoins are worth it?

I don't know any current ones, but there are always coins worth mining. I sold my mining rig a good while ago, I now only trade crypto, but I'm looking into mining again.

You can hype and use fancy marketing crap as much as you want, it still won't change the fact that it's barely faster than a 1070 a year later

No, it's literally RV700 2.0, this time Pajeet made Fiji actually work.

No, it's probably Q4 2017-Q1 2018 for Vega11.

Hello, fuccboi. AMD gave you the die shot explicitly to shut up niggers like you.

unfortunately those fuckboys will be chimping out until we see vega.

then they go REEEEEEEEEEEEEE

You can keep crying as much as you want.

So what does the term 'sandbagging' mean in this context?
>4 but actually 8
Sounds good?

It means presenting something as worse than it is, probably to slow down the competitor's response time.

A good example is Zen, they promised 40% IPC, we got 52%.
Sandbagging.

Good question, what is it? Sounds like a marketing gimmick

It's not really a pixel structure, it's a layer of quantum dots in the backlight that converts the LED light into very pure primaries, which widens the color gamut.

anandtech.com/show/10243/qd-vision-color-iq-and-the-philips-276e6-review/2

Thanks. I was confused since in today's context it's about holding back flood waters, so it comes off as delaying the inevitable.
Like they're lying to themselves that everything's going to be alright.
If anyone wants to drop the origins of the phrase, you're more than welcome.

It's exactly that. Everyone assumed Vega still had 4 geometry engines, while in reality, as seen in the die shot, they went for a more balanced approach.

Thought so. More is usually better. Any good resource that would help the layman look at die shots and understand what they're looking at? Lurking moar can only get one so far.

My guess is it's alluding to using sandbags as jettisonable ballast. So sandbagging would be intentionally downplaying upcoming products so you can then "jettison" the "weight" holding you back and shoot ahead of the competition, taking them by surprise.
That's just my understanding though

This tactic won't work much longer, Nvidia knows what RTG is capable of, and they're not slowing down.
Unlike Intel who has their head so far up their own ass they only noticed after AMD ate their entire SKU lineup overnight.

That's a very good explanation, and it also goes back to the beginning of the technology era, when the balloon was invented. I'm pretty sure I've heard that expression used in papers from that time.
Not that I'm of that age. I've just researched many things.
Often turns of phrase can be deduced with logical thinking.
Thanks again.

>they're not slowing down.
Ye, they bolted a fucking ASIC onto an oversized die in a desperate attempt to get into the datacenter market.

Fixed function and specialized hardware is always a good thing.
Pretty much everything in large scale is moving from general purpose to specialized hardware.

That's why everyone will move away from GPGPUs to FPGAs and ASICs. The GV100 move reeks of desperation.

>Unlike Intel who has their head so far up their own ass they only noticed after AMD ate their entire SKU lineup overnight.
Their management, yes. But please don't label their engineers as complacent with the status quo; it's not that they don't strive for better.
We're living in times where such a company has political persuasion, and aiding their products gives them power.
That's not to say a smaller company doesn't have the ability to persuade government. It's just that some do it for worse outcomes.

Intel has one good engineering team: the dudes behind EMIB. That's it.

The window is already gone. Ethereum will be mined up before the end of the year. Card prices skyrocketed back up with the BTC boom this year.

Buy a 1070 right now and it might pay for itself. A Vega card would never pay for itself. Frontier Edition will be $1000, 30 MH/s, 250W, calling it now.
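
Back-of-the-envelope payback math, if anyone wants to check (a rough sketch; the Vega FE line uses the guesses above, while the 1070 price/hashrate and the revenue and electricity rates are my own placeholder assumptions):

```python
# Rough mining payback calculator. Nothing here is official data.
def payback_days(card_price_usd, hashrate_mh, watts,
                 usd_per_mh_day=0.12, usd_per_kwh=0.10):
    revenue = hashrate_mh * usd_per_mh_day      # gross $/day at assumed rate
    power = watts / 1000 * 24 * usd_per_kwh     # electricity $/day
    net = revenue - power
    return float("inf") if net <= 0 else card_price_usd / net

print(f"1070:    {payback_days(400, 26, 150):.0f} days")   # hypothetical $400 card
print(f"Vega FE: {payback_days(1000, 30, 250):.0f} days")  # the guess above
```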

Their NICs are good, and Optane would be really good if it weren't so unholy expensive.

They don't want Nvidia to start selling the Titan Xp for $400 before the Vega release.

That's why they're pretty much airtight about its performance, except in computational applications.

>Their NICs are good
Ah, yeah.
>Optane is really good
Fuck no it dies overnight. And we're talking R&D.

Optane was codeveloped with Micron though.

>away from GPGPUs to FPGAs and ASICs

GPUs and FPGAs aren't ASICs?