Is this the power of nvidia? is nvidia sabotaging ryzen?

Other urls found in this thread:

reddit.com/r/Amd/comments/64r0hj/thanks_nvidia_for_letting_ryzen_5_look_bad/
youtube.com/watch?v=nIoZB-cnjc0
videocardz.com/amd/radeon-instinct/radeon-instinct-mi25
anandtech.com/show/2183
twitter.com/NSFWRedditGif

It's almost as if they actually are

Are these real benchmarks, or photoshopped?

Good thing I'm not planning to go with Nvidia next build, but RX 480/580/570.

it certainly looks that way doesn't it?

they're real.
reddit.com/r/Amd/comments/64r0hj/thanks_nvidia_for_letting_ryzen_5_look_bad/

The way you're meant to be played™

>lereddit
Fuck off back to r/amd, you street shitting filth.

DELETE THIS

Holy fuck NVIDIA get your shit together.

I never thought nvidia drivers would become so shit.

And AMD's got so much better.

You guys are so mean, nvidia is just trying to save the planet :^)

i'm starting to wonder if you really have all this spare time or if you're literally on the clock.


ah, and
Osu~

I'd be more concerned about the magically scaling 7400. Apparently having lower clocks ensures better performance.

amada is cuuuuuuuuuute!

I love Amada!

i noticed weird results like that one. one shitpost thread on /g/ right now trying to shit on the r5 has a magical 7600 performing better than a 7700k at stock clocks.

You know these are anandtech graphs? It's literally on their front page.
Why even ask for source?

It's probably nothing more than a timer bug.
I'd be worried about Windows if this can be consistently reproduced

These results instantly cast doubt on how accurate the entire review is, regardless of whether the results work for or against the product being tested. As it stands at absolute face value with all else being equal these results are impossible.

i'm more worried that they benched with windows in the first place.

The hell? This isn't the first time this has happened. OS scheduling is complicated, and outliers will always exist. Just a few weeks back Ryzen had more performance coming out of sleep than it did from cold boot thanks to a timer bug, and this is no doubt something similar.

To dismiss an entire review over this from a reputable site that's been running for some 20 years is nonsense.

>reputable
implying anandtech is reputable

You're the most reputable one, user.

>You're the most reputable one
implying i'm reputable

I can see you are.
I'm good at reading people.

Even with nvidia gimping, the 1600x is looking good.
Just 7% fewer fps, $100 lower price and no stutter.

>OS scheduling is complicated, and outliers will always exist

Sure, but if anandtech was worth their salt they would've gone "hmm, this looks odd, what is going on?" and either omitted the test from the final results or at least wrote a short paragraph that the results are repeatable and thus stand.

Given how these sorts of reviews work they probably only did a single run and just slapped the results in a chart without any real analysis. In fact the entire review (like many others) is severely lacking in analysis of the results they get.

It's why a lot of the ryzen 7 reviews were atrocious.

>"according to my learnings ryzen is shit at vidya, do not buy!"
>"Why is that mr reviewer? What do you think is causing these results?"
>"¯\(°_o)/¯ and i'm not going to investigate because fuck you"
>"Jesus Steve calm your tits and cut your hair"

From my experience most of these test runs are all automated; they don't even see the results until they push them into an Excel table later. At least after so many years I personally wouldn't be sitting in front of the monitor waiting for them to finish.

R5 1500X/RX 580 is a 1070/7700 for half the price?

>1500X
Get the 1600 for $30 more and don't be retarded

All that says is their method of proofreading is using auto spellcheck in MS word. So not only are they lazy they don't even look at the shit they publish.

The automation isn't a problem, but when you get results like this they should go back and test properly. Plus it angers me that most reviewers only run a game for 30 seconds to a minute, which can mess with results a lot (particularly in gpu reviews, as Nvidia's boost skews results hard - cards can boost really fucking high before they heat up). I dislike HardOCP for many reasons, but one thing they do right is that their test sequences tend to be 5 minutes long, which isn't so prone to those odd results - classics like counting the start of benchmarks where assets are still loading in and fps drops way, way down.

games should be run at least 30 minutes to get real averages
the sample size is too low, even 5 minutes is not enough
what the heck else are they doing? it's their job to be precise
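For what it's worth, the warm-up problem described above is easy to handle in an automated harness. Here's a minimal, hypothetical sketch (not any reviewer's actual tooling) that discards a warm-up window where asset streaming tanks the frame rate, then averages the rest. Note the average is total frames over total seconds, not a mean of instantaneous fps values:

```python
# Hypothetical benchmark helper: average FPS over a long run,
# discarding the warm-up window where assets are still streaming in.
def average_fps(frame_times_ms, warmup_s=60.0):
    """frame_times_ms: per-frame render times in milliseconds."""
    elapsed = 0.0
    steady = []
    for ft in frame_times_ms:
        elapsed += ft / 1000.0
        if elapsed > warmup_s:
            steady.append(ft)
    if not steady:
        return 0.0  # run was shorter than the warm-up window
    # Average FPS = frames rendered / seconds elapsed in the steady state.
    return len(steady) / (sum(steady) / 1000.0)
```

With a 5+ minute run the boost-clock spike at the start contributes little to the result; with a 30-second run it dominates it.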

>1600x only 10 frames behind THE fastest single threaded CPU out there with blistering clocks on all games
Good shit, I'm tempted to go with a 1600x instead of a 1700 and then go all in on whatever refined Zen comes out next year.
On another note, I wonder why the 6900k performs so well when the 1800x should be matching it.

>Nvidia has no drivers

It's like all the AMD memes now belong to Intel and NVIDIA. 2017 has been fucking hilarious.

Who tests games anymore? It's all built in benchmarks now.

Why do you think everyone tests Ashes, Warhammer, Tomb Raider? Nobody plays these things.

>I wonder why the 6900k performs so well when the 1800x should be matching it.
it's matching it, anything less than 10 fps is margin of error

also it's just 3 games pulling the averages down: warhammer, rotr and fallout 4

It's long, but it's well worth it.
youtube.com/watch?v=nIoZB-cnjc0

tl;dw: Nvidia has a godtier software scheduler on DX11, but they can't do that shit on DX12, at least not yet. AMD has godly hardware that takes care of the scheduling itself when the game is coded correctly.

real reason behind amd in every console

>On another note, I wonder why the 6900k performs so well when the 1800x should be matching it.

Some games pull the average down; also AGESA updates are incoming (or have already arrived on some boards in the last 2 days)

If you can decently schedule on AMD's hardware their chips are absolute monsters. AMD (in broad terms) packs considerably more hardware into their chips (that also tend to be denser) than Nvidia does.

It's why Hawaii in particular has seen such huge improvements as games have become more and more tuned to the strengths of GCN. Doom is generally a fantastic example of tapping into GCN.

It's also why developers curse AMD's drivers to this day.

Jesus Christ, that improvement.
Yeah, I'm definitely going with Ryzen this year. A 1700 or a 1600 will do, this 3570k has served me well, but I think it's time to let go.

>that bw and latency improvement

Oh fucking sweet!

To properly use GCN's hardware scheduler you need competent coders which is not the case in the age of cheap pajeet-powered ports.

NO IT'S JUST AMD THAT SHIT

ANGRY.JPG

Nvidia got complacent and shitty, AMD nutted the fuck up and got gud

that L3 cut is more important than RAM
they promise another 6ns for RAM in May

Nah, L3 latency per MB was already slightly superior to even Skylake, a 25% DRAM latency reduction is insane, that's practically 2 gens of hardware updates.

Over at B3D someone coded a rough as fuck test to try out async compute. They were scratching their heads (much like most of the community) when AMD cards were not showing the anticipated results (namely they ran slower but did not crash - Nvidia cards ran like the devil, then died as they couldn't handle the load and triggered a windows timeout).

It wasn't until a dude known as sebbi (who seriously knows his shit and is one of the few devs on the site familiar with GCN) came in and explained why. Most of it was above my head.

Whoever designed GCN was probably some kind of savant.

>not even a fury X, let alone a pro duo

wewwewewewewew laddys

>they can't do that shit on DX12, ever

Fixed for you.

Oh Christ what's gonna happen when Vega hits if Nvidia is fucking up so bad?

Shills and deflection. Nvidia will start claiming AMD is too hard on developers forcing them to write good code.

>/r/Amd
totally legit
also you have to go back

These pictures are from AnandTech, you inbred retard.

Hardly - it's just a gpu design that was forward thinking in relation to certain industry trends. In some areas it is considerably behind Nvidia's architecture(s). It's why the hype train is building for vega - it looks to be the biggest shake-up since AMD moved from Terascale to GCN.

Hawaii, polaris and fiji all focused on different improvements, and vega is set to be all that plus more. However, that could introduce other chip bottlenecks or regressions.

realistically speaking, what would i as a gamer (+ recording & cpu-heavy emulation) but also as a video and audio editor gain from updating my bios with the newest agesa code?
Are the improvements actually noticeable in said workloads? I've updated my bios 4 times in the last month and i'm tired of it, as i personally haven't noticed any improvement with the 4 previous ones, so i'm wondering if it's not better to wait 1-2 months for an even more improved bios instead of updating every time

The AGESA update (the official AMD one) is the largest BIOS update so far and possibly the biggest the processor will see over its lifetime; it's very rare for a processor to not display its full performance at launch day, so this is pretty big, and considering how much Ryzen loves memory this is great.

As for what it'll improve: practically anything that skips the caches and goes to DRAM, so pretty much everything outside of microbenchmarks that fit in the caches.
How much performance increase I can't really say, can be anything from 2% to 15% and possibly even higher in edge cases depending on how shitty the application is.

I wonder how the fuck HBCC works and what the fuck is it in general.

NVIDIA confirmed for poojeet-tier garbage

Nah, they used to write good drivers. Key word is 'used'. Why did everything go so wrong?

I guess they also sat on their ass doing nothing, just like Intel

HBCC is (to my understanding) basically about having an onboard controller that can address many different types of memory. It is primarily a way to reduce latency by ignoring the cpu (for example) and going straight to system ram. It's not a new concept, but the latency penalty for doing it has been enormous.

If you saw those blue coloured radeon pro chips they announced a while back with the bolt-on SSD? They are probably using an early revision of the HBCC so they can swap vram out to the SSD and back without murdering performance.

And that's why i love AMD. They'll never outsell Intel/NVIDIA, but hey, they are making cool new tech.

so basically if AMD wasn't a bunch of lazy brown jews and released vega and ryzen at the same time they would have basically won. see this is why women can't run companies

>see this is why women can't run companies
What? Why? Su basically unfucked AMD.

nah they are fucked. ryzen is a massive flop because nvidia and intel were able to steer the narrative. vega isn't due for so fucking long that nvidia has the ability to destroy it whenever they wish. AMD is walking ghost syndrome now: they look healthy but they can't recover

Vega is held back because getting it validated is hard due to the shitload of new features it has, plus the delay allows them to squeeze out more clockspeed and better drivers for launch.

Sensible choice; everything in recent AMD GPU history (Fiji, Hawaii, Polaris) has had one or two very obvious bottlenecks hampering a pretty powerful architecture.

The heck are you fucking smoking? Get medication, Volta isn't out until like Q2 2018.

Vega is already fucking out by the way, but for the machine learning market.

People were saying AMD are fucked for the past, what, five years? And they are still here, releasing new CPUs and GPUs. Also Vega is the biggest change to GCN ever, so there's no reason to rush it out.

AMD will likely never fully disappear, but they are effectively gone from the discrete GPU market, and their cpu business may limp along for a time

>five years?
More like 2007, when Barcelona launched.
Now they have a very competitive CPU arch and an incoming monster of a GPU

>just from the discrete GPU market they are effectively gone
What? They recently launched their Radeon Instinct line, and are about to release consumer Vega, their drivers are better than ever and ROCm is ready to compete with CUDA. Like, what the hell are you really smoking? Some kind of crack.

Vega 10 is bigger than GP102 by some 20%, so it would make sense for it to take more time to launch; remember that the full GP102 only launched a month ago in the form of the Quadro P6000

>where's Vega

Here's your fucking Vega, and it's a fucking beast.

videocardz.com/amd/radeon-instinct/radeon-instinct-mi25

>12.5 TFLOPS FP32
That's actually a metric fuckton of compute. I wonder how well it'll transition into gaming performance.

>sabotaging ryzen with a card that was released before ryzen
AMDrones are literally this retarded

...

>300w tdp

Yes and?

>what are drivers

A) It's 250W
B) We can make up the power difference by using
a 65W R7 1700 instead of a 140W 6900K :^)

I don't really care. I just wonder how well its compute performance will translate into gaming.

>$5,000 AMD dedicated compute card barely beats $800 nVidia gaming card in compute
AHAHAHAHAAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAH

Said nvidia card can't do fp16/fp64 for shit, and can only dream about any int perf.

>Barcelona
for those interested in barcelona: anandtech.com/show/2183

Nobody besides server vendors have these cards for testing.

somewhere above a 1080 at worst, possibly touching a 1080ti at best.

If amd got that fucker saturated and literally did no optimizations other than for saturation, then it's possible to get close to the titan xp, but i'm not even considering this an option.

It could also clock higher than the pro line as it's not passively cooled; that may put it leagues above what nvidia has depending on how high the clocks are, but that's another thing i'm not even considering an option till shown, as I fully believe amd is going to screw gamers again like they did with the 480 and only give us the chips enterprise would not take.

My shitpost has been erroneously aimed at the wrong poster, apologies.

12.5 puts it in Titan Xp territory if I'm not mistaken, but nvidia have traditionally managed to get more FPS out of less compute performance.

this shit will be expensive, huge die + hbm2

HBM2 is not that costly. Huge die is.

Vega has the same core setup as Fiji, 4096 shaders, but Vega is clocked 50% fucking higher! Remember that Fury X boosted to 1050MHz.

Just from clockspeed alone that should put it in 1080 Ti territory; the current Fury X has a VRAM bottleneck and it's still only some 10% slower than a 1070.
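A rough sanity check of that clockspeed arithmetic, assuming the usual peak-FLOPS formula of 2 FP32 ops per shader per clock (FMA). The 1.526 GHz figure is just what 12.5 TFLOPS would imply on 4096 shaders, not a confirmed clock:

```python
# Peak FP32 throughput: shaders * 2 ops per clock (FMA) * clock in GHz
# gives GFLOPS; divide by 1000 for TFLOPS.
def peak_fp32_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

fury_x = peak_fp32_tflops(4096, 1.05)    # Fury X at 1050MHz: ~8.6 TFLOPS
mi25   = peak_fp32_tflops(4096, 1.526)   # ~1.53 GHz needed to hit ~12.5 TFLOPS
```

So the quoted 12.5 TFLOPS on the same shader count does work out to roughly a 45-50% clock bump over Fury X.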

Well then retard, buy a system and test it with both the latest and earlier nvidia drivers. If there's a significant performance difference, i.e. nvidia are deliberately sabotaging ryzen products, it will be one of the biggest tech stories of the decade. You'll get a shitton of pageview money and nvidia might even get into legal trouble.

But you won't. You won't even send an email to Anandtech suggesting that they try this. Because we both know you're just a butthurt fanboy

Su's AMD is basically a different company

The same will obviously be true about the AMD gaming version of the compute card too, retard

Autistic question, but what's the point of having over 60 fps?

Because you can.

60 -> 100 FPS is nice, but 144FPS is a meme

same point as in having over 30.

So what, your shitpost was not just retarded, it was completely irrelevant.