OK, time for a summary:

Vega is the biggest fundamental architecture change since the switch from TeraScale to GCN 1.0.
The new NCU means a massive boost to shader-core efficiency and makes it far easier to fully saturate the GPU with work.
The biggest problem with Fiji (Fury X) is that its 4096 shaders didn't count for much, because of large bubbles in the pipeline and large numbers of shaders sitting idle, doing no work each cycle.
Even with a low-level API like Vulkan, which is the best-case scenario, the Fury X still isn't firing on all cylinders, because Vulkan only solved the driver-overhead problem. There are still bubbles in the workload because of the fundamental design of the GCN architecture itself.
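
To put a rough number on those idle shaders, here is a back-of-the-envelope sketch in Python. The peak figure matches the Fury X's paper spec; the 30% idle fraction is purely illustrative, not a measurement:

shaders = 4096          # Fury X stream processors
flops_per_clock = 2     # one FMA = 2 FLOPs per shader per clock
clock_ghz = 1.05        # Fury X boost clock

peak_tflops = shaders * flops_per_clock * clock_ghz / 1000
print(peak_tflops)                 # ~8.6 TFLOPs on paper

idle_fraction = 0.30               # illustrative only: lanes stalled or masked off
print(peak_tflops * (1 - idle_fraction))   # ~6.0 TFLOPs actually doing work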

AMD is switching from the standard 4x16-wide SIMD blocks of a normal GCN CU (compute unit) to a new design of variable 8+4+2+2 blocks, with either four of these per NCU (next-generation compute unit) or a mix of these and standard 16-wide blocks.
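
The 8+4+2+2 split is this thread's claim, and nobody outside AMD knows how (or whether) hardware would actually schedule it. Purely as a toy model, assuming each block can fire independently once per issue, here is how variable-width blocks could cover a partially full 16-lane wavefront:

from itertools import combinations

BLOCKS = (8, 4, 2, 2)   # the rumored variable-width SIMD blocks

def best_cover(n_active):
    """Smallest total width >= n_active lanes, using each block at most once."""
    best = BLOCKS       # worst case: fire everything (16 lanes)
    for r in range(1, len(BLOCKS) + 1):
        for combo in combinations(BLOCKS, r):
            if n_active <= sum(combo) < sum(best):
                best = combo
    return best

for n in (3, 9, 13, 16):
    fired = best_cover(n)
    print(n, "active lanes ->", fired,
          "| wasted slots:", sum(fired) - n,
          "| vs fixed 16-wide:", 16 - n)

With 3 live lanes, a fixed 16-wide block wastes 13 slots while the variable blocks waste 1; that is the whole pitch in miniature.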

They could not do this in the past because they did not have the technology to build a hardware scheduler and memory scheduler complex enough to run this kind of architecture.

We have test samples of Vega with NCUs disabled and very low clock speeds, and it's beating a 1900 MHz overclocked GTX 1080 in DOOM. Now, there is a gain from using Vulkan over DX11 and OpenGL for AMD, but that's on the driver-overhead side, not on the fundamental shader-operations side. It just means the CPU is feeding draw calls to the GPU at peak efficiency.

A lot of people dismiss the test because it's Vulkan, but they don't know what they're looking at. The Vega sample is running Fiji 1.2 drivers, which means the instructions are running through the Vega NCUs as if they were standard fixed 16-wide blocks, like on a Fury X card.

So you need to look at it from this perspective.

This new architecture is more advanced than Maxwell/Pascal. You are going to see unreal performance from Vega when it's finally released. Poor Volta.

honestly no idea what you just said but i hope amd succeeds so nvidia stops jewing everyone with gpu prices.

you won't get real vega performance until like 3 years later when games actually start to use the new features

hi uh im new where can i make an account?

>archive.rebeccablacktech.com/g/?task=search&ghost=&search_text=Vega is the biggest fundamental architecture change since
fuck off

nobody cares nerd

no shit!!

we're here to talk about raisin and all its cornz!!! yeaaahaaa gonna build databases while playing GTA5 and live-streaming the whole affair to no viewers (but that'll change once I'm rocking raisin with 4 more cornz)

thank you based amd u changed my life and will make me rich

my first treat will be hiring sammy cornz to play at my bar mitzvah

No idea what any of this shit means, but I will just stick with my i5-4460 and 1050 Ti for now

Nice copypasta, mate.

my wallet is ready raja

bless us tomorrow with your glorious vega news

those are some thicc cables

I want Wega to succeed. If it doesn't I'll be stuck with Nvidia forever.

How many fucking amps is he delivering to his ears? Wtf are those megawatt THICC cables.

>128 32-bit ops per clock
O_O

nice pasta, retard.

The point of the 8+4+2+1+1 SIMD blocks is presumably so that a 4x4 px shader fragment can be broken into 2 sub-fragments of any size (16+0, 15+1, 14+2, ... 1+15 pixels), which means that most cases of dynamic, non-coherent branching won't waste a shit-ton of pipeline slots. Traditional SIMT allows branching but basically just masks off instruction computations for SIMD lanes that aren't taken, wasting a bunch of efficiency.
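
A toy slot-count model of that masking waste, in Python. The 16-lane fragment, equal-length branch paths, and a perfect two-way split are all simplifying assumptions:

def simt_slots(taken, width=16, path_len=10):
    """Classic SIMT masking: on divergence, both branch paths issue over
    the full width, with untaken lanes masked off each cycle."""
    if taken in (0, width):          # coherent branch: only one path runs
        return path_len * width
    return 2 * path_len * width      # both paths run serially, full-width

def split_slots(taken, width=16, path_len=10):
    """Idealized sub-fragment split: each path issues only on its own lanes."""
    if taken in (0, width):
        return path_len * width
    return path_len * taken + path_len * (width - taken)

for taken in (1, 8, 15):
    print(taken, "lanes taken:", simt_slots(taken), "slots masked ->",
          split_slots(taken), "slots split")

In this model a divergent branch costs 320 issue slots under classic masking versus 160 with the split; whenever the two paths are the same length, the split halves the cost.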

Vega/NCU could be great, but we really don't have much to speculate on so far, and this stale analysis is just pretty pathetic.

>which means the instructions are running through the Vega NCUs as if they were standard fixed 16-wide blocks, like on a Fury X card

Does this mean I can literally do twice as much 8-bit float math per clock?

But how often is that used? 16-bit and 32-bit are more common. If it doesn't make 32-bit float faster, a lot of games won't see performance increases.

Okay, 8-bit is used a lot: 8 bpc color. But then 10 bpc becomes the standard.
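
For what it's worth, the "packed math" idea behind these doubled rates is that two FP16 values (or four INT8s) share one 32-bit register, and a single instruction operates on all of them at once. A Python sketch of just the packing; the low/high layout here is illustrative, not AMD's documented register format:

import struct

def pack_half2(a, b):
    """Pack two floats into one 32-bit word as IEEE half-floats (b in the high half)."""
    lo = struct.unpack('<H', struct.pack('<e', a))[0]
    hi = struct.unpack('<H', struct.pack('<e', b))[0]
    return (hi << 16) | lo

def unpack_half2(word):
    a = struct.unpack('<e', struct.pack('<H', word & 0xFFFF))[0]
    b = struct.unpack('<e', struct.pack('<H', (word >> 16) & 0xFFFF))[0]
    return a, b

w = pack_half2(1.5, -2.25)
print(hex(w), unpack_half2(w))   # one packed op processes both halves per clock

The FP32 lanes themselves are untouched, which is exactly why FP32-heavy games would see no gain from it.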

>This new architecture is more advanced than Maxwell/Pascal.
Well, duh. As much as I plan to buy Vega, Maxwell and Pascal were only stop-gaps to Volta, just like Polaris was a stop-gap to Vega.
Pascal wasn't even on the fucking roadmap a few years ago.

I'm fine with this since it's supposed to launch getting 60fps minimum (MINIMUM, not average) in lots of games like Doom and SWBF at 4K resolution.
That's where it's starting, and it'll get even better.

I bought my 7970 and it was getting like 54 fps in Crysis Warhead at 1920x1200, but driver updates over the years increased that a little, and newer games utilize the card much better, so I'm still getting 60+ fps at 1920x1200 in virtually every game nowadays.

So it's basically 90-110% of the performance of a 970, and I bought it 6 fucking years ago lmao. It's been more future-proof than my i5-2500k, which I can't wait to upgrade. The CPU is okay, just not as great as the GPU. Wish I had gotten a 2700k.

But will it output VGA? I refuse to buy a card I can't use with my CRT. And fuck off with your active converters; if I wanted lag I would use an LCD.

Pajeet, the absolute madman.

Prince Koduri is ascending to godhood.

it's not about amps, it's about collision-free electron transport for pure, smooth sound


it's really something I can't even explain until you've built and tweaked your rig for hours and hours, spent thousands of dollars, and meticulously arranged and cleaned and dusted your listening quadrant for those 4 minutes of heaven. Call Me Maybe literally brings me to tears when I listen on a similar setup

...

...

VGA to DVI is an active converter?

>Shilling for a GPU that has to cherry pick its benchmark in order to look 5% faster than a stock 1080

>Completely ignoring Titan X and 1080 Ti performance
>Completely ignoring the fact that Pascal is a filler release, and that a year or a year and a half after its release you will get Volta

Ryzen worked because of certified shit wrecker. This Pajeet can't ever compete.

Nvidiots on suicide watch

I thought Vega was also supposed to have double the half-precision float ops per cycle? But it only has roughly double the TFLOPs of an RX 480 with roughly double the CUs. What gives?

If it really doubled the half-precision ops per cycle, you'd expect its half-precision figure to be quadruple the RX 480's. But the headline TFLOP numbers are typically quoted at single precision, and there the packed-FP16 doubling doesn't show up at all.
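
The bookkeeping, in Python. The RX 480 numbers are its real paper spec; the Vega clock is a placeholder guess, since final clocks haven't been announced:

def tflops(shaders, clock_ghz):
    # marketing math: 2 FLOPs (one fused multiply-add) per shader per clock
    return shaders * 2 * clock_ghz / 1000.0

print(tflops(2304, 1.266))    # RX 480: ~5.8 FP32 TFLOPs
print(tflops(4096, 1.5))      # Vega at a guessed 1.5 GHz: ~12.3 FP32 TFLOPs
print(2 * tflops(4096, 1.5))  # packed FP16 doubles only the FP16 figure: ~24.6

Double the shaders at similar clocks gets you roughly double the FP32 TFLOPs; the FP16 doubling only appears in a separate half-precision figure.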

The SWBF demo they showed earlier this week seemed to show the Vega flagship card has at least 95% of the performance of the Titan X Pascal.

Is Frostbite engine too cherry picked?

no, it's just a good engine that doesn't pick which vendor's hardware it runs well on

Yes, yes it is.

Anything DICE-related has always had a heavy AMD flavor to it. They're one of the very few companies that give enough of a damn to implement Mantle/Vulkan support in their games. Nobody gives a fuck about your cherry-picked shill benchmarks that run 20-30% ahead of your average round-up performance.

5%=20% now, okay

SWBF has neither Mantle nor Vulkan support

They co-designed mantle

BF4 had mantle

thi4f also has mantle, what point are you trying to make?

BF4 isn't SWBF, you fucktard.

annnnnnndddd he's listening to it using a fucking Mac as the source. Talk about useless

>want to build a new rig next month
>AMD has a new gpu coming out!
>won't be here until summer at the earliest

guess i just get a gay old 1080 ;__;

BF4 and SWBF use the same game engine (Frostbitty). It's part of EA's strategy to cut (((costs))). Same engine, every fucking game they make

The 1080 uses GDDR5X and it doesn't even throttle; Vega has to use HBM2. The die is fucking huge as well, so it's not going to be cheap. Vega is DOA.

he doesn't know about WASAPI
there is a myth among audiophiles that the Mac doesn't remix the source. oh yes it does, less than DirectSound, but it does

The newest gen of cards all have DVI-D instead of DVI-I, so yes, they would require active converters for VGA.

just run your iGPU for a... wait, if you're going Ryzen they don't have that, do they? Well, you're in a very interesting situation, my friend

>nshitia shilling

...

>double the CUs.

4096/2304 == 2 ???

>demo they showed
Source? Sounds fake. Everyone already knew Vega is below the 1080.

>Waitingfags

Pathetic