So the Vega results are in and it's shit: its performance sits between the 1070 and the 1080.

gamersnexus.net/hwreviews/2973-amd-vega-frontier-edition-reviewed-too-soon-to-call/page-4

wait for drivers

You're all forgetting that RX Vega will be water-cooled, unlike the $1000 air-cooled FE, so look for reviews of the $1500 water-cooled FE for a meaningful idea of how the gaming card will perform.

PCPer and this review make it quite clear Vega is not running at its promised 1600 MHz clock; it's throttling and thus underperforming.

Vega at its proper 1600 MHz clock will be GTX 1080 competitive at the very least.
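For scale, the clock claim can be turned into a back-of-envelope FP32 throughput figure. A minimal sketch, assuming 4096 ALUs (a figure cited later in the thread) and 2 FLOPs per ALU per cycle for fused multiply-add; the throttled clock is a hypothetical illustration, not a measured value:

```python
# Peak FP32 throughput = ALUs x FLOPs-per-cycle (2 for FMA) x clock (GHz).
def peak_tflops(alus, clock_ghz, flops_per_cycle=2):
    return alus * flops_per_cycle * clock_ghz / 1000.0

claimed = peak_tflops(4096, 1.6)     # at the advertised 1600 MHz
throttled = peak_tflops(4096, 1.35)  # at a hypothetical ~1350 MHz under throttling
print(f"{claimed:.1f} vs {throttled:.1f} TFLOPS")  # 13.1 vs 11.1 TFLOPS
```

A sustained clock well below spec shaves TFLOPS straight off the top, which is why throttling alone can drop the card a performance tier.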

It's running with TBDR disabled (why? Who the fuck knows). So the results are in line with it being a bigger Fiji with a wider pipeline.

Dumbfuck gaymurs.

>When overclocking, underclocking, or even restoring a change to defaults, HBM drops from 945MHz (reported) to 500MHz (reported).

Fiji drivers confirmed

HBM really is a meme.

Imagine what AMD could have done if they put their limited resources into better drivers and arch refinement.

Instead we got Mantle and HBM, fuck raja

wow I guess I'll wait :^)

Vega is not released yet and is probably suffering from drivers not optimized for gaming.

>mantle
Literally the best API since forever.
>HBM
Literally the best DRAM since forever.
Lmao what the fuck.

It's suffering from no drivers, period.

That's a first for ATI/AMD.

No, I don't mean bad/unoptimised drivers. I mean uarch features simply not working. At all.

>best api
They paid DICE fucking millions to redesign their game around it, and it gave +10% over DX11.

C'mon user, you aren't even trying

HBM was also rushed out with shitty drivers. AMD's justification, "the one thing devs want more of is bandwidth", was just flat-out wrong; they want more SPs/cores at higher clocks.
HBM has a huge sunk cost that they will never recuperate.

so?
in a year's time it'll be way above those cards
screenshot this post

why is it lower than a demo? this whole FE thing is weird

>a huge sunk cost that they will never recuperate
I can say this about everything since the R700 days.

Their GPU division saved them from the abortion that was Faildozer, retard. Without it, AMD FINISHED AND BANKRUPT wouldn't be a meme.

Wrong. Mantle was incorporated into Vulkan and became the heart of DX12. Obviously, if you don't leverage the advantages of DX12/Mantle/Vulkan, you don't get the "performance gains". You can write the same shitty code in DX12 as in DX11, it's just more work. If you put that work towards optimizing the game, improving parallelism, using compute shaders, and saturating the wavefront/warp to exploit ILP, the newer APIs obviously have a lot to offer. Just because it isn't practiced in games that are incredibly large and unwieldy to port doesn't mean there aren't inherent advantages to the API. Have you even read the Vulkan spec?
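The parallelism argument can be sketched as a toy model (plain Python standing in for API concepts; none of this is real Vulkan/DX12 code): a DX11-style driver validates state per draw call on one thread, while a Mantle/Vulkan/DX12-style app records command buffers on many threads and pays almost nothing at submit time.

```python
from concurrent.futures import ThreadPoolExecutor

def record_command_buffer(draws):
    # Validation/encoding cost is paid here, once, on a worker thread.
    return [("draw", d) for d in draws]

def submit(queue, command_buffers):
    # Submission is cheap: no per-draw state validation on the main thread.
    for cb in command_buffers:
        queue.extend(cb)

# Four threads each record 100 draws' worth of commands in parallel.
work = [list(range(i * 100, (i + 1) * 100)) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    buffers = list(pool.map(record_command_buffer, work))

gpu_queue = []
submit(gpu_queue, buffers)
print(len(gpu_queue))  # 400
```

The point of the model: the expensive step scales across cores, and the serial step is a cheap append, which is the whole pitch of the low-overhead APIs.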

Fiji's memory controller could only handle half the theoretical maximum throughput of HBM. HBM isn't targeted towards gamers you absolute dolt. It's literally designed to shrink the card to near-MXM sizes. It would do well in texture mapping, and it does. Fiji was limited by the render back end in most games; HBM had nothing to do with it, except when textures got too big to fit its tiny 4 GB space.

Also wrong about more cores, unless you're only talking about GPGPU. Fiji was at its limits and so is Vega, at 4096 ALUs over 4 shader engines, and both dies are very big. Need high-res textures loaded live, or 4K/8K rendering in real time? Memory bandwidth is absolutely essential. HBM makes it scalable without needing to run the controller at inane clock speeds.
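A rough sketch of the bandwidth argument, with purely illustrative numbers (bytes per pixel, overdraw factor, and frame rate here are assumptions for the example, not figures from the thread):

```python
def framebuffer_gbps(width, height, bytes_per_pixel, overdraw, fps):
    # Raw color+depth traffic per displayed frame, scaled by overdraw and fps.
    return width * height * bytes_per_pixel * overdraw * fps / 1e9

uhd = framebuffer_gbps(3840, 2160, 8, 4, 60)    # 4K, 8 B/px, 4x overdraw, 60 fps
uhd8k = framebuffer_gbps(7680, 4320, 8, 4, 60)  # 8K: 4x the pixels
print(f"{uhd:.1f} GB/s at 4K, {uhd8k:.1f} GB/s at 8K")
```

And that's before texture fetches, which usually dominate: doubling the resolution quadruples the traffic, which is exactly the scaling pressure the post is describing.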

But then again, why reason with a gaymur :^)

Dice literally got PAID to "leverage the advantages of Mantle" and it fucking sucked. Go look at the benches.

Low-level APIs are good, I'm not debating that. I'm purely saying that for AMD to invest this much into it and get so little back (+10% at best when specifically targeted) is where it all went wrong. They should have used that time to develop their shitty DX11 drivers, which is what 99% of their userbase wanted. Mantle was a bad business decision.

>HBM isn't targeted towards gamers you absolute dolt.
Stopped reading right there. I'm pretty sure this is bait; you're a retard if you're serious.

Look no further than Doom Vulkan performance to see why the API is so good.

>HBM isn't targeted towards gamers you absolute dolt.
It really isn't. Believe it or not, GPUs can be used for things other than gaming; it's just another technology that could be leveraged for gaming. The JEDEC chairman even states:
>“GPUs and CPUs continue to drive demand for more memory bandwidth and capacity, amid increasing display resolutions and the growth in computing datasets. HBM provides a compelling solution to reduce the IO power and memory footprint for our most demanding applications,” said Barry Wagner, JEDEC HBM Task Group Chairman.
Nothing to do with gaming, bud; it just says HBM is more power efficient and smaller than previous memory paradigms.

>Dice literally got PAID to "leverage the advantages of Mantle" and it fucking sucked. Go look at the benches.

OK? How can we be sure it uses all the features of Mantle in every single class and piece of HLSL? Do reduced driver overhead and parallelized DMA mean nothing to you? What about better interoperability with console low-level APIs like GNM, which run on AMD SoCs? For the record, DICE also contributed a massive deal to Mantle. It's not like AMD just tasked all of their software engineers on this project.

>Between 1070 and 1080 in meme AMD-shilled games
>1070 performance in Nvidia-optimized games
>Pretending to be workstation card but only being able to compete with meme prosumer cards like Titan Xp - not actual workstation cards like Quadro.

This card is fucking useless.

AYYMD IS FINISHED & BANKRUPT

AYYMDPOORS CONFIRMED ON SUICIDE WATCH

>>Pretending to be workstation card but only being able to compete with meme prosumer cards like Titan Xp - not actual workstation cards like Quadro.
I'd like to see your source for that.

Who would have known a workstation card performs just as badly as every other workstation card!

Meanwhile, let's check what Vega FE does in the market AMD created it for:
pcper.com/reviews/Graphics-Cards/Radeon-Vega-Frontier-Edition-16GB-Air-Cooled-Review/Professional-Testing-SPEC
Oh shit, 90% of the benches show it AHEAD of the Titan Xp. Who would have known!

>no ECC and certified drivers
>workstation
pick one

HBM has ECC a priori, moron.

>Vega at its proper 1600 MHz clock will be GTX 1080 competitive at the very least
>1500 dollarinos
Wow, what a steal.

Vega FE has no ECC and no certified drivers.

Actual WX Vega comes later for much moneys.

and it is an actual workstation card, unlike Vega FE

HBM1 had ECC and data-masking capabilities a priori, but you couldn't use both at once.
HBM2 has ECC and data masking, and it's enabled through the MRS registers.
So next time you try to answer back like a smartass, at least search a bit before you embarrass yourself further.
Also, which card has certified drivers at launch? NOT A SINGLE ONE, because it takes time; Blender just released their CF yesterday.

It doesn't matter if HBM supports ECC if it's disabled at the hardware level.

>water cooling

Enjoy your fried mobos.


TEAM GREEN

Oh god, don't answer back if you don't know shit, please.

design-reuse.com/articles/41186/design-considerations-for-high-bandwidth-memory-controller.html

spend some time and learn something

Wait for the dedicated gaming cards next month.

You do realize GPUs draw the majority of their power directly from the PSU via 6/8-pin connectors, with the PCIe slot itself limited to 75 W?
This invasion of Sup Forumsedditors needs to stop.
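To put numbers on the power-delivery point (connector ratings here are from the PCIe electromechanical specs; the 8+8-pin layout is just an example configuration):

```python
# Power budget per source, in watts, per the PCIe/auxiliary-connector specs.
CONNECTOR_W = {"slot": 75, "6pin": 75, "8pin": 150}

def board_power_limit(connectors):
    return sum(CONNECTOR_W[c] for c in connectors)

total = board_power_limit(["slot", "8pin", "8pin"])  # whole-board budget
from_psu = board_power_limit(["8pin", "8pin"])       # share fed straight from the PSU
print(total, from_psu)  # 375 300
```

So on an 8+8-pin card, 300 of the 375 W never touches the motherboard at all, which is the poster's point about fried mobos being a non-issue.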

The source is the GN review; it literally gets smacked by the Titan in 60-70% of the "professional work" benches, not to mention it got murdered in Blender as well.

Really? Because the PCPer review says otherwise.

Actually, the score is 7:4 for vefefe vs the Titan in PCPer, and they used different benches than GN and didn't even include the Blender bench.

You subhuman Sup Forumsermin.

They used SPEC, which by any measure is the gold standard in the industry.

I'd say it's far more reliable than using a version of Blender that kept crashing because it didn't have yesterday's CF.

GDDR has ECC as well, so what? Just because HBM2 can have it enabled doesn't mean this card has it. Did AMD market vefefe as having ECC? No, so it doesn't have it; if it did, they would have plastered it all over the marketing.

That's irrelevant. Water-cooled components always have a high chance of failure. Is AMD offering any warranty on the card? And what about my other components? What am I supposed to do if liquid from the GPU leaks onto my rig?

Anyway, it competes with the Titan; it's literally anyone's guess which card wins which bench. If you want consistent performance with certified drivers, buy a Quadro; if you want a cheap (by pro standards) fake pro card, buy a Titan or vefefe. Performance is case by case.

Exactly. Water-cooling PC components is the biggest meme ever; there is literally a 100% chance that at some point your rig will start leaking, and you just hope it won't destroy too many of the components.

Are you retarded or just pretending?
In any case, gaming cards will be produced by third parties, and they usually make different variants so you can select what suits you.

In reality it's going to do worse than the 1070 in most games, as that benchmark suite is clearly very AMD-friendly. Just look at how well the RX 580 is supposedly doing.

Not him, but only custom loops are a meme; closed loops are exactly as described, CLOSED and SEALED.

>GN
I'm not sure I believe them after the shit they pulled with Ryzen.

The fluid is non-conductive, and even if it weren't, the voltages involved are so low that it's not going to cause damage. The chance of it damaging your system is minuscule.

My CPU block leaked distilled water onto the back of the GPU for like two days. Literally the only effect was occasional system freezes I couldn't explain. After I stopped the leak everything worked fine.

Yet they still manage to leak; a quick Google will show you thousands of horror stories involving closed loops.

Well, then it's a good thing VegaFE isn't a gaming card you retarded Sup Forumsedditor.
Why don't you shell out three thousand bucks on a Quadro so you can masturbate to playing CS:GO at 320p on a workstation card?

SPECviewperf 12.1 uses only software from 2013: Maya 2013, NX 9, Creo 2, SolidWorks 2013. The WX 7100 literally competes with an M2000 in SPV12 but will readily compete with a Quadro P4000 in SPECapc, which uses the 2016 versions of the software. Blender benchmarks should use the 2.79a build (which is actually OpenCL compliant), not the 2.78 GN used.

JUST WAIT PLEASE I SWEAR WE'VE GOT A GAMUR VERSION PLEASE

You, lad, listen to Master Bra'tac when he tells you to know when to shut up.

you have no clue what you are talking about

SPEC updates its suite only when they see that Intel is in trouble...

which shows why they never thought to change it until three weeks ago, while Intel was hacking through the numbers...

Titan isn't a gaming card now?

Its target group isn't really gamers, but it can game perfectly fine.

Titan is targeted to be something in the middle. Not a 100% gaming card but not a true workstation card either.
I'd probably compare it to Radeon Pro Duo as far as target market goes.

KREE SHOL'VA

>invest into expensive meme ram tech
>it doesn't do anything

lmao

Yes, let's all be like nVidia and wait until the g-ds gift us new tech.

WE

Oh my god, so the latest AMD turd that got shilled and hyped to hell and beyond has failed to deliver yet again?

I am so shocked! I have definitely not seen this a dozen times before and nobody could see it coming a mile away!

#wow
#whoa

t. Jen Hsun Huang

So being way below year old cards on every metric while not even dreaming of touching the current flagships is what this exciting new tech has provided? Oh boy I am thrilled for this revolutionary tech! It totally paid off!

Yes Huang, you should be eternally grateful to ATi/AMD engineers for pioneering memory tech you're using THRICE.

Seems like those engineers also love pioneering the last place in performance.

At least they are pioneering something.

P O O
O
O

I N
N

G P U
P
U

Nvidia, not being retarded like AYYYMD, saw that the only benefit of HBM is a smaller PCB package for stacking Teslas. AYYYMD decided to push it into the gaymen market, where it delivers zero (0) benefits for now.

Yes, nVidia is more retarded than AMD since they can't engineer hardware for shit.

Nvidia was behind HMC, which was better than HBM but more expensive at the time. Enjoy your discount "technology".

HMC failed. Horribly.

>be AMD
>release card without drivers
How can you fuck up this badly? It would be better for their image if they delayed it.

It's a card for people who want to tinker with Vega as uarch and ROCm platform. Also for gamedevs.

That probably explains why current nvidia offerings are years ahead in performance when compared to amd's while consuming half the energy, right?

More like:
>Be AMD
>Release workstation and datacenter versions of card with the proper drivers for those uses
>Use placeholders for the game related drivers as you continue working on those for the release of the gaming cards
>Expect tech sites not to be retarded enough to benchmark the workstation cards for games using those placeholder drivers without some massive warnings

Then again, this is probably to be expected. Google and Facebook have together pretty much killed the market for ad-funded news and review sites by gobbling up the lion's share of all ad revenue, so the only people left running these sites are the people too retarded to get a job actually making something (rather than writing about things other people have made).

Using PowerVR technique from 1994 is not that hard, and AMD uses it in Vega too.

They should have never released Vega FE for general public.

>salty that JEDEC told Nvidia to fuck off.

AMD themselves compared their card to Titan, so everyone expected it to be able to game as well.

>AMD themselves compared their card to Titan
In compute.

Why would you compare your workstation card with a glorified gaming card?

lol literally Fury drivers confirmed

Do you even know what a workstation card is? They usually perform even better in games than the equivalent gaming card.
The first Titan had some non-gayming features; every single one after it was a 100% pure gaming card. You are retards who easily fall for the marketing.

Titan Xp is still a fully enabled gp102. Also FE is "prosumer" card, not a WX workstation one.

>Also FE is "prosumer" card
So which is it? Prosumer card like Titan or workstation card like quadros?

It's prosumer, like the OG Titan. Actual WX Vegas will be announced at SIGGRAPH.

>all this arguing over what Vega FE is or isn't

Just proves what a fucking turd AMD's marketing is for there to be this much confusion.

Let's not talk about Titan X and Titan X and Titan Xp.

Well, there goes the dream of a full AMD build. I guess they can't pull off another Ryzen in the GPU department. It's a miracle they even made Intel shake in their boots. Here's hoping Navi will be the one.

They can, but it takes time, money, and software engineers. And since the GPU market is much, much less profitable than selling EPYCs at absolutely stellar margins, they don't care enough to throw money at RTG.

In a way it makes me happy that AMD is finally getting their shit together and focusing on the things that matter. EBYN will surely bring them lots of profits, that is why I hope Navi will get enough budget to slaughter Nvidia.

If Navi is coming in 2019, it's already well into development, and extra R&D budget from Ebyn sales probably can't save it now.

Vega will fight nVidia the moment they write fucking drivers for it. For fuck's sake, why is TBDR not working? Who thought releasing not-Maxwell with TBDR disabled was a good idea?
Also, if AMD takes at least 20% of the server market and 35-40% of the desktop CPU market, Nvidia's revenue will be pocket change to them.
It doesn't need to be saved. It needs even better drivers if it's actually MCMed GPUs.

This thread is excellent proof that Sup Forums knows absolutely nothing about GPUs or AMD and can only regurgitate what equally stupid people on Reddit say.