Well

techpowerup.com/237211/amd-enables-vega-crossfire-with-upcoming-17-9-2-drivers-over-80-scaling
Let's get this thread goin', mateys. 80% CFX scaling from a simple driver update, thanks to XDMA. This is only the beginning of the unstoppable momentum. /Ve/Ga/ confirmed for being the "video card equivalent of Zen". It's not even funny how strictly AMD follows the "fine wine" dogma. Down to the letter. Expect this shit to become truly GLORIOUS in ~3 months, at worst. nGreedia is on literal suicide watch.

P.S.
youtube.com/watch?v=ZVaJH8pWnWk


sys.Sup Forums.org/g/imgboard.php?mode=report&no=62548592

for mobile users


kek
>also sage

>he wants to run two 400w cards in parallel
>just to get somewhat better performance than a 1080 Ti
>and only in the few games that will be optimized for it

>This is what noVideo shills are actually made to believe so that they don't change sides and stay the brainwashed milking cucks that they are

they added mgpu support for vega, big fucking deal
shill pls leave, there will NOT be any magic driver that suddenly improves performance by 100%

>there will NOT be any magic driver that suddenly improves performance by 100%
Oh, really?


idiot

>magic driver
Core uarch features well described in the whitepaper are not magic. Primitive shaders will finally free GCN from being front-end geometry bottlenecked at 4 triangles per clock and promise to *at least* double Vega's polygon throughput.
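Quick napkin math for the doubters (the ~1.55 GHz boost clock and the flat 2x factor below are my own assumptions; only the 4 tris/clock limit comes from the whitepaper):

```python
# Napkin math, not whitepaper figures: rough peak geometry throughput,
# assuming a ~1.55 GHz boost clock and taking "at least double" at face value.
clock_hz = 1.55e9            # assumed boost clock (my assumption)
legacy_tris_per_clock = 4    # classic GCN front-end limit (whitepaper)
ngg_tris_per_clock = 8       # hypothetical 2x, per the "at least double" claim

print(f"legacy path: {clock_hz * legacy_tris_per_clock / 1e9:.1f} Gtris/s")  # ~6.2
print(f"NGG path:    {clock_hz * ngg_tris_per_clock / 1e9:.1f} Gtris/s")     # ~12.4
```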

Anyone who doubts me should feel free to go read the Vega whitepaper for themselves at radeon.com/_downloads/vega-whitepaper-11.6.17.pdf

Big fucking deal, AMD. Until you release drivers with primitive shaders active, VEGA will continue to be an absolute half-baked joke. You did good with Ryzen, get your shit together mang.

>You did good with Ryzen, get your shit together mang.
They are pretty busy snorting cocaine while shipping Vega to Apple.

>Fury X
Yeah, I agree. You are definitely an idiot.

this fucking shill every fucking time...
jesus christ man, lay off the shilling for once. if they deliver - they deliver, if not - they won't. no need to constantly post the same promises over and over again

and before you make a stupid assertion, you stupid little shill: yes, AMD did hype Vega ("poor Volta"), just like they hyped up the Fury X ("overclockers' dream")

I think the fire that your Radeon started damaged your brain

>posting whitepaper data is shilling
?

Don't worry, kid. I'm on GLORIOUS HD 6850 right now, no fires coming from that one that's for sure.

(you)

>resorting to (You)posting
?

I posted information and a link to the whitepaper to specifically annoy you, since I recognized you.

The drivers are a little late because the driver team is basically rewriting half the D3D rendering pipeline in the drivers, but boy, things are going to get interesting when they finally finish. Haven't you noticed they've delayed AIB cards until the drivers are uarch feature-complete?

>the same response every time he gets called a shill
?

There's a reason no serious reviewer ever recommends Crossfire or SLI unless you have more money than sense: it's complete garbage.

>posting whitepaper data is shilling
?

I hope you're right man, I really do, because I want vega to be great
but something tells me that it's going to turn out the same way the fury x did

>but something tells me that it's going to turn out the same way the fury x did
Nah, at least Fiji launched feature-complete (because it was basically a double Tonga).
Vega is very different from simply throwing in moar ALUs.

times change
it used to be alright, though not great, but in recent years it just hasn't been supported well on either end (games or drivers)
we shall see how this update turns out

Fury wasn't missing core uarch features in its drivers when it launched, and it was always doomed to underperform precisely because of the limitation of its front end only being able to process 4 triangles per clock, which wasn't nearly enough to feed its 4,096 shaders. Primitive shaders are specifically targeted at fixing this exact issue, but they're time-consuming to implement because it literally involves replacing large chunks of the DX11, DX12 and Vulkan rendering pipelines in the drivers.
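To put a number on "not nearly enough to feed its 4,096 shaders", here's a crude ratio using the public ALU counts (the comparison itself is just my illustration, not anything from AMD):

```python
# How many ALUs each triangle-setup slot has to keep fed per clock.
# ALU counts are the public specs; the 4 tris/clock setup limit applies to all three.
gpus = {
    "Tonga (R9 380X)": 2048,
    "Fiji (Fury X)":   4096,
    "Vega 10 (V64)":   4096,
}
tris_per_clock = 4

for name, alus in gpus.items():
    print(f"{name}: {alus} ALUs -> {alus // tris_per_clock} ALUs per setup slot per clock")
```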

It's AMD's fault for not enabling a core uarch feature of their gpu before shipping. You send out an incomplete product, you deserve to be called out on it.

would it be a good idea to try to sell my fury x and get a vega 64 in preparation for that? because if those drivers turn out to be real, then the value of my fury x will go down the toilet

And everyone is shitting on them.
At least it performs great at CAD.
>because it literally involves replacing large chunks of the DX11, DX12 and Vulkan rendering pipelines
>large chunks
It literally replaces everything but Pixel Shader.

Yes if you have a backup card.

no backup card, I'm thinking about selling the fury x and getting v64

I wish AMD would stop giving consumers literal trash from the wafers.
V56 is better because you can flash it with the 64 BIOS for 1.35V HBM voltage.
500-something ALUs won't make a big difference.
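For reference, the actual gap, using the public stream processor counts:

```python
# "500-something ALUs" checked against the public stream processor counts.
v64_alus = 4096
v56_alus = 3584

diff = v64_alus - v56_alus
print(f"{diff} ALUs difference, i.e. {diff / v56_alus:.1%} more on the 64")  # 512, ~14.3%
```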

Calling AMD out for shipping Vega with the drivers half finished is not the same thing as claiming that the disabled uarch features will never EVER be implemented.

I'd at least wait until AIB cards get released rather than buying a reference Vega 64 now before prim shaders hit the public drivers and we see if RTG has managed to do what they claimed.

>It literally replaces everything but Pixel Shader.
It still uses the fixed function geometry engine and doesn't replace the pixel shader, so I thought "large chunks" was a fair description.

>v56
well, in that case I'll hold off until those drivers drop, because you can get a v64 for less than a v56 here lol

>It still uses the fixed function geometry engine
Only to raster tris. Everything else will be moved onto the shaders.
>doesn't replace the pixel shader
Pixel shader is the only DX9 thing that is still actually efficient.
Russia?

no, Lithuania

Same shit.
Slavlands are rightful nVidia clay.

I would get rid of it, the Fury X is a shitcard. It performs slightly better than an RX 580 but consumes much more power and has only 4 GB.

>Only to raster tris. Everything else will be moved onto the shaders.
That's not how I understand the whitepaper. From my understanding, prim shaders replace all the steps that were already being done on the CUs anyway, while still using the fixed function geometry engines for surface shading, tessellation, fixed function culling (PDA) and primitive assembly. This image from the whitepaper seems to indicate pretty clearly to me that the parts shaded in grey are the same between both paths.

>Same shit.
no

>fixed function geometry engines for surface shading, tessellation, fixed function culling (PDA) and primitive assembly
>surface shading
>shading
That's the subset of primitive shaders that replaces hull & domain stage. The only things left in FF are vertex assembly, raster and scheduling.

Can you please link me to something that confirms all of this? I don't see anything in the whitepaper itself to confirm that the NGG fast path bypasses most or all of the fixed function geometry engine, and the diagram I posted, which clearly indicates the fixed function primitive culling from Polaris still being used in both cases, seems to me to support the understanding that the shaded grey areas are the same fixed function steps used in both paths.

Is there some source floating around that actually explains that prim shaders don't use most or all of the fixed function geometry shaders that I've missed?

There are no "fixed function geometry shaders".
Shaders are shaders. Vega kills the usual shader stages in favour of novel and efficient Primitive and Surface shaders.
In these they merged several shader stages and added functionality to them that was exclusive to FF units before.
>Primitive shaders can operate on a variety of different geometric primitives, including individual vertices, polygons, and patch surfaces. When tessellation is enabled, a surface shader is generated to process patches and control points before the surface is tessellated, and the resulting polygons are sent to the primitive shader. In this case, the surface shader combines the vertex shading and hull shading stages of the Direct3D graphics pipeline, while the primitive shader replaces the domain shading and geometry shading stages.
I.e. the only FF points are the actual tessellator, primitive assembler and scheduling.
PDA accelerates tessellation, that's why it's even there.
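Condensing that quote into a lookup table (my own summary of the tessellation-enabled case; the raster and pixel shader rows come from the earlier posts in this thread, not from the quote):

```python
# Purely illustrative data, not an API: which Vega stage picks up each D3D stage.
d3d_to_vega = {
    "vertex shader":   "surface shader",
    "hull shader":     "surface shader",
    "tessellator":     "fixed-function tessellator",
    "domain shader":   "primitive shader",
    "geometry shader": "primitive shader",
    "rasterizer":      "fixed-function raster (DSBR)",
    "pixel shader":    "pixel shader (unchanged)",
}

for d3d_stage, vega_stage in d3d_to_vega.items():
    print(f"{d3d_stage:>15} -> {vega_stage}")
```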

CFX hasn't been garbage since the R9 290X, faggot.
After Radeon fully moved to XDMA, CFX improved drastically both in frame pacing and in GPU utilization/scaling. Hell, the RX 4xx series, for one, had almost-perfect scaling (it was, what, ~92% for two-GPU systems, if my memory is right?). noVideo's SLI is the same story played completely in reverse - their SLI used to be better than CFX, but after the R9 2xx/GTX 9xx era the overall quality and stability of SLI dropped significantly. So to sum it up - only SLI is utter garbage nowadays, while XDMA CFX works very well.
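Napkin math on what ~92% scaling actually buys you (the 60 fps baseline is made up for the example; the 0.92 factor is the figure I quoted from memory above):

```python
# Two-card scaling, illustrative numbers only.
single_gpu_fps = 60.0
scaling = 0.92   # fraction of a second card's performance actually gained

dual_gpu_fps = single_gpu_fps * (1 + scaling)
print(f"{single_gpu_fps:.0f} fps -> {dual_gpu_fps:.0f} fps with the second card")  # ~115 fps
```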

>while XDMA CFX works very well.
Only IF it works.

Fuck off and suck a dick, scum. I'm of the Russian god-ruler race of mankind and I've been on team Red's side ever since the last truly worthwhile and absolutely flawless GeForce came out, the GTX 285. It's been nothing but one major letdown after another from the Green side ever since, all the way up to this day. Radeon, however, has let me down literally only once so far, with the HD 7xxx series.

>GT200b was good
Wtf are you smoking my dude.

That wasn't the point.
The stupid motherfucker, like the total idiot he is, proclaimed that modern CFX sucks just as bad as it did in the distant past. I put him down by presenting facts on the current state of CFX and SLI.

Maybe I'm using the terminology wrong, as I'm still learning about GPUs, but as I understand it the fixed function geometry engines in GCN (as of polaris) include 4 things:

1) Input assembly
2) Tessellator
3) Fixed function primitive culling
4) Primitive assembler

My understanding of the whitepaper is that the NGG fast path still uses all four of those things and replaces only the steps that were being done on the CUs, with the finished results then going to the rasterizers. Is this not correct?
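To be concrete, here's roughly how I'm picturing the two paths (just my mental model of the diagram, not anything official):

```python
# Fixed-function (FF) steps stay put; only the CU-side shader stages get merged.
legacy_path = [
    "input assembly (FF)",
    "vertex shader (CU)",
    "hull shader (CU)",
    "tessellator (FF)",
    "domain shader (CU)",
    "geometry shader (CU)",
    "primitive culling (FF)",
    "primitive assembler (FF)",
    "rasterizer",
]
ngg_fast_path = [
    "input assembly (FF)",
    "surface shader (CU)",    # merges vertex + hull
    "tessellator (FF)",
    "primitive shader (CU)",  # merges domain + geometry
    "primitive culling (FF)",
    "primitive assembler (FF)",
    "rasterizer",
]

print("legacy path:   " + " -> ".join(legacy_path))
print("NGG fast path: " + " -> ".join(ngg_fast_path))
```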

>Is this not correct?
Not fully. Don't forget what DSBR does.
FF units are supposed to do fuckall when it works.

I never said the entire GTX 2xx series was good. Only the ABSOLUTE PERFECTION that was the GTX 285, and in particular MSi's GODLIKE "SuperPipe N285" (the VERY FIRST "Twin Frozr"). It was SO GOD-FUCKING-DAMN AMAZING that I'm literally having wet memory dreams about it even these days. That shit was THE VERY TIP OF MOUNT OLYMPUS, ABSO-EFFING-LUTE CREAM OF THE CREAMIEST CROPS. This was THE very pinnacle of PERFORMANCE and QUALITY for GeForce. It all started going downhill fast right after that exact point. The GTX 285 was a DIAMOND-HARD RAGING BONER tier card and noVideo has made NO such card in ALL THE YEARS SINCE. I'm absolutely and completely unironically serious when I say that I could physically start jerking off vigorously to MSi's GTX 285 SuperPipe N285 Twin Frozr I. The shit was UBER-CASH levels of good. No amounts of titfucks and blowjobs my gf ever gave me could match the sexual experience I had with the GTX 285 back in the day.

>No amounts of titfucks and blowjobs my gf ever gave me
We're all little girls here, user-kun.

Doesn't DSBR only affect the rasterization step? I wasn't under the impression that it changed anything prior to rasterization.

It splits the screen into a grid of tiles looooooong before raster occurs. Now apply primshader to that (including deferred vertex attribute computation).
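If anyone's lost, here's a toy sketch of the binning idea (the 32 px tile size and the bounding-box shortcut are my simplifications, not how the actual hardware bins):

```python
# Toy version of binned rasterization prep, nothing Vega-specific: figure out which
# screen tiles a triangle's bounding box touches before any rasterization happens.
TILE = 32  # assumed tile size in pixels, made up for the example

def touched_tiles(tri):
    """Return the (tile_x, tile_y) bins a triangle's bounding box overlaps."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    x0, x1 = int(min(xs)) // TILE, int(max(xs)) // TILE
    y0, y1 = int(min(ys)) // TILE, int(max(ys)) // TILE
    return {(tx, ty) for tx in range(x0, x1 + 1) for ty in range(y0, y1 + 1)}

print(touched_tiles([(10, 10), (90, 20), (40, 70)]))  # a triangle spanning several tiles
```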

>weebshit
KYS

Anime website, gopnik-kun.

Hasn't been that in a decade or more

youtube.com/watch?v=JLqmb-kGM5Y
Except no, it's still not good. There's literally only one game they tested that had consistently great scaling. It's incredibly disturbing that Sup Forums continues to shill Crossfire despite the vast majority of internet communities telling people to avoid it. I remember how hard Crossfire 480s were shilled here when AMD bragged about the Ashes benchmark.

Anime website.

Ebin. Now get off my containment board nigger.

Anime website.

how is the scaling on PCIE gen 2 with R56?
Is it gimped a lot?

>This shit again
When will you newfags learn that the difference between PCI-e 2.0 x16 and PCI-e 3.0 x16 is literally within the margin of error? And the same goes for x8. To see ANY significant drops AT ALL you have to put a PCI-e 3.0 card into a fucking PCI-e 1.0/1.1 slot with x8 wiring.
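The raw per-direction link bandwidth, straight from the spec figures, if you want actual numbers (whether a single GPU ever saturates it in games is the real question):

```python
# GB/s per lane, one direction, standard spec figures.
PER_LANE_GBS = {
    "PCI-e 1.1": 0.25,   # 2.5 GT/s, 8b/10b encoding
    "PCI-e 2.0": 0.5,    # 5.0 GT/s, 8b/10b encoding
    "PCI-e 3.0": 0.985,  # 8.0 GT/s, 128b/130b encoding
}

for gen, per_lane in PER_LANE_GBS.items():
    for lanes in (8, 16):
        print(f"{gen} x{lanes}: {per_lane * lanes:.1f} GB/s")
```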

not actually a newfag, been here since 2008.
Anyway, I have an LGA 775 mobo that supports CrossFire at x8, that's why I was asking.

I'm not saying you're a newfag on this site, but you're a newfag on hardware if you're sincerely asking such questions, because this shit's been known for more than a decade now.
And it's exactly the same situation as the "DDR3 vs DDR4" RAM comparison. Literal fucking margin of error, unless you're APU'ing. APUs are LITERALLY the only things out there which seriously benefit from faster RAM.
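Same deal with RAM - theoretical dual-channel bandwidth for a few standard speed grades (the comparison itself is just my illustration):

```python
# Transfers/s * bytes per 64-bit transfer * channels, in GB/s.
def dual_channel_gbs(mt_per_s, bus_bits=64, channels=2):
    return mt_per_s * 1e6 * (bus_bits // 8) * channels / 1e9

for name, mt in [("DDR3-1600", 1600), ("DDR4-2400", 2400), ("DDR4-3200", 3200)]:
    print(f"{name}: {dual_channel_gbs(mt):.1f} GB/s")  # 25.6 / 38.4 / 51.2
```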

>Yeah, I agree. You are definitely an idiot.
Why? Fury X was a way better card for its time than Vega 64 is

>LGA 775
>CrossFire
Wew

never understood this pic.
Did Nvidia make a card they thought had 3.5 GB of memory but it was actually 4, with a sticker to cover it up, or did they sell it as 4 GB when it actually had 3.5?
And was AMD selling an 8 GB card disguised as a 4 GB one, or did they make a card they thought was 8 GB but was actually 4?

>thought it was 3.5gb mem but was actually 4
noVideots STILL believe it was 4, oh wow

It can be modded to take a quad-core Xeon.
So that plus an R56, once they get cheaper, will be good enough.
Also I have two 1200W Gold PSUs just sitting there doing nothing.


>LGA 775
>CrossFire with late 2017 GPUs
WEEEW

What's the problem?
The mobo also supports DDR3 up to 16 gigs.


>What's the problem?
Only that you're a complete schizo of a masochist, I guess.

At what point do you see a failure of the system?

The CPU can happily handle the GPU load since it's a server CPU, unlike the garbage i3s people are putting in modern rigs.
DDR3 RAM up to 16 gigs, and as the user above me said, there's not much difference to DDR4.
Put an SSD in it and it'll be good for another few years.
I don't have a problem with power draw since I'm an ex-GTX 480 user.