AMD AGESA 1.0.0.4 improves performance by 2.45% on average

HAHA Intel shills BTFO

"The results of the update are quite clear. While small, every single game gained performance, some more than others. On average the gains were 2.45%, with the top gainers being Ghost Recon Wildlands at 6.07%, Rise of the Tomb Raider at 4.33%, and Hitman 2016 at 3.73%."

original article: thetechaltar.com/amd-ryzen-agesa-1-0-0-4-testing/

Shaving whatever little gaming advantage the delidlake had.

Did they test anything multithreaded like Handbrake? It should benefit from this.

In case anyone is wondering: Ryzen 5 works flawlessly with an outdated BIOS, but you need to update to a version which officially supports it if you want to overclock or set RAM speeds above 2133MHz.

On a side note, stuttering and odd frame drops while looking at certain parts of the map disappeared while playing War Thunder. Previously had a 2500 non-K.

>Did they test anything multi-threaded like Handbrake? It should benefit from this.

Not that I'm aware. From what I've gleaned from forum posts, the update seems to make memory overclocks less stable (like going from a stable 2933 to 2400), but Cinebench scores still went up from the previous BIOS version.

Better benches will come next week I hope.

>AMD gets better as you move up resolutions
>somehow all the intel shills want to test unrealistic settings like 720p low
>butttt muh GPU bottleneck

Dear sir/madam,

I politely ask you to delete this thread as it hurts the image of our company Intel™ and our Core™ processors.
Sincerely,
Moshee Bergstein, Professional Social Media Relations Manager at Intel

What makes these things even funnier is they're trying to predict some future where the GPU isn't a bottleneck.

Hilarious. Going from 4K to 8K requires a GPU four times stronger than one capable of 4K 60Hz; at best that's 8 years of progress, at worst 11 years.

Do these retards seriously think we'll still be at 1080 by then?
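
For what it's worth, the "four times stronger" figure above is just the pixel count ratio; a quick back-of-the-envelope check (assuming the usual 3840x2160 and 7680x4320 resolutions):

# rough sanity check on the 4K -> 8K raster workload ratio
pixels_4k = 3840 * 2160   # ~8.3 million pixels per frame
pixels_8k = 7680 * 4320   # ~33.2 million pixels per frame
print(pixels_8k / pixels_4k)   # 4.0, so ~4x the pixels to shade per frame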

What's the point of a 7700k?

It's the only CPU that can do 700 FPS in CSGO at 640x480

So a BIOS update yields a bigger IPC gain than Skylake -> babby lake
lmao

Did Broadwell even have higher IPC than Haswell? Didn't seem so to me

FineWine™ in action.

Were Ryzen 5 reviews done with this update?

No.
The AGESA updates rolled out 2 days later.

you buy it if you play games at 500fps at 720p obviously

it did

I like how Intelfags are literally begging for any amount of IPC increase at this point, even 4% will do, just so they can move on from Skylake.

Too bad soonest for that is 2019 with Icelake :)

Just shows that there's plenty of juice left in the arch; a general increase in clocks, IPC from bigger FPU/integer units and a better branch predictor, and a decrease in memory latency will make Zen+ a monster.

I wish this included an 1800X and a 6900k for reference.

It's a damn shame so few reviews do these relative performance charts, they're way more useful than cherrypicking one game.

These threads are always the same. At launch the board gets flooded with threads shitting on AMD's new products with so-so benchmarks where Nvidia/Intel win. Some months later, when AMD has greatly improved performance to the point of being on par with or beating the competition, the Nvidia/Intel shill posters dry up. I guess they don't want to admit they were wrong or that the $2000 'gaymen' system they got cucked into was a waste of money.

...

If you don't want to spend much on memory just get some 2600MHz dual rank memory, should offer the same performance as single rank 3200MHz

>AMD gets better as you move up resolutions
It's because high max FPS stops padding the Intel averages, while the minimums for all the Intel CPUs, even the 7700k, suck in a lot of games.

Didn't you know? 800x600 GAMING is a realistic CPU benchmark, not the few dozen benchmarks made specifically to test the CPU, oh no, those are pointless.
4K and 1440p gaming? Why would you want 1080p gaming? Everyone plays at 768p anyway, but a good amount are still at 480p.

Stuttering emulator

so how much does AMD gain from all these different updates in total?
>mobo updates
>memory updates
>windows updates
>game updates
>video card updates

seems like at least 15%?

Depends on the application, some mobos already had the performance on launch day, they didn't need performance updates.
Memory improves Intel too.
Windows is pretty application specific.

but it's true that the day 1 reviews aren't relevant at all anymore, right?

Mostly.

Game updates can be over 30%
RAM going from 2133 to 3200 is a 20% improvement in games like FO4. (Intel CPUs also get like a 15%+ improvement there, though)
The others combined account for maybe 10-15%.

damn,
so we are looking at like... 40% improvement over the day1 reviews....
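
Worth remembering those sources compound multiplicatively rather than just adding up, and the biggest figures only apply to specific games. A quick sketch (every percentage below is a made-up placeholder for a "typical" game, not a measurement) shows how a stack of modest per-source gains lands in that ballpark:

# hypothetical per-source gains for a typical game -- placeholders, not data
gains = {
    "bios/agesa":  0.05,
    "memory":      0.15,   # well short of the 20% FO4-style case
    "windows":     0.05,
    "game/driver": 0.10,   # well short of the 30% outliers
}

total = 1.0
for source, gain in gains.items():
    total *= 1.0 + gain     # independent improvements multiply
print(f"combined: {(total - 1) * 100:.0f}%")   # ~39% with these placeholders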

There are some 15% differences in performance between different motherboards at stock. What the fucking shit.
Especially the ASUS, that's a fucking $300 board, what the fuck are these retards doing?

Playing CS:GO on a 555Hz monitor, of course.

Will we see those in BIOS updates? Or will that be a revision of the current Zen architecture we have now?

Those are all silicon changes, so it's all Zen+
Some more latency improvements could be gained from BIOS updates, but how much I can't really say.

Well ignore the GoW4 CPU render one. That result is pretty meaningless, as terrible as it is on the Prime x370.

But yeah. Remember how most of the day1 gaming reviews looked awful if not on the Aorus? It's better in the results that actually use the GPU.

That's a bench from like mid March. Things are different now, and were different day1. Hopefully someone does some new tests soon.

On BIOSes from the past week, people have been able to run the stable overclocks they had at about 0.05v lower. Like 1.35v for 4GHz when they were needing 1.4v before. So yeah, that's improving too.

Don't forget, it took A YEAR for DDR4 issues to be sorted out when it first launched with Intel.
I'd say things are progressing quickly, and it looks like the vast majority of minor performance issues will be solved by June.

No, it's actual literal paid shilling. You don't pay for the shills for months and months; bad cost-effectiveness. Instead you pay for them at major releases - Ryzen release, RX480 release, etc. so that you get the low-hanging fruit retards excited about building a new computer with more money than sense or patience and who get easily confused by charts and easily manipulated by rhetoric. You'll see another influx of shills around Vega's release.

Also AMD has an absolutely shit marketing team; as far as I can tell they don't do any kind of guerilla marketing bullshit like Intel and Nvidia do. It's entirely AMD fanboys doing it for free.

WOW 2.45 IT'S FUCKING
NOTHING


PLEASE MAKE IT STOP

i'm regretting getting a 6700k

And another microcode update next month to improve RAM speed and compatibility, so things will look even better.

>microcode update gives more performance than 2 generations of Intel processors

I'm sorry but that's really funny.
Also your CPU is obsolete junk.

>microcode update gives more performance than 2 generations of Intel processors

This is by far my favorite meme of the month.

Man, the 1600 is an amazing option, the value it gives is insane. I'm really tempted to go with that instead of the 1700 so I can go full retard with 7nm Zen+.

>I'm really tempted to go with that instead of the 1700 so I can go full retard with 7nm Zen+.
That's exactly what I decided to do.

Except I'm getting a 1600X since I wanted a cooler that I'd be using with 7nm Zen3 or Zen4 anyway.
Just waiting/hoping the 1600X drops $30 or something soon. I need to wait for my AM4 bracket anyway.

I figured if I got the 8 core, I'd be more reluctant to upgrade later, even if it's like a 10% IPC increase and 10% clock speed increase. But going from the cheaper 6 core, which is more than good enough most of the time, to the even better per-core 8 core will make it easy.

>so we are looking at like... 40% improvement over the day1 reviews....
Well that was kinda required, because day one reviews showed Sandy Bridge-tier performance.

For pc monitors, possibly.

1080p is the last resolution where a 24 inch monitor still has perfect scaling and you don't need to squint to see the smallest text you normally encounter.

1440 = 28-30
4k = 40-50
8k would need 80-100 inches to be in that perfect scaling range.

We will hit a point where GPUs find 4K trivial, and by then 4K with scaling may be the norm that no one even thinks about. But I really find it difficult to go from LCD to an LCD that is at best a side grade; I want LCD to OLED or actual quantum dots.
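
If you want to check pairings like those yourself, it's just pixels-per-inch arithmetic; a small sketch (the diagonal sizes below are ones picked so the density comes out identical, they're not from any spec):

import math

def ppi(width_px, height_px, diagonal_in):
    # pixel density = diagonal resolution in pixels / diagonal size in inches
    return math.hypot(width_px, height_px) / diagonal_in

for name, w, h, size in [("1080p", 1920, 1080, 24),
                         ("1440p", 2560, 1440, 32),
                         ("4K",    3840, 2160, 48),
                         ("8K",    7680, 4320, 96)]:
    print(f'{name} @ {size}": {ppi(w, h, size):.1f} PPI')
# all four land at ~92 PPI, i.e. the same effective text size as 1080p at 24"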

because Intel looks bad in those.

remind me, wasn't intel locked at 1866 memory for a year?

With this AGESA update, can Ryzen boards finally officially support 2400MHz with all four RAM slots populated?

Why pay more for maybe 100-200 MHz better overclock than the 1600? You might even be able to sell the Wraith cooler for additional savings for the next upgrade.

I might want to just run it stock. It hits 4.1 stock on 2 cores or 3.8ghz all core, I believe.
I'm really not hurting for money, either.

I feel I'll have no need to overclock it when I vsync to 60fps. I'll be getting a 1440p or 4k HDR10 monitor soonish too, in which case I'll probably have it locked to like 85 or 90 fps. The 1600X stock should handle just about any game I'll play like that.

1600 overclocked is definitely the best performance/$. Some are only getting 50-150mhz more on the 1600X it looks like. But I just don't care and I want the 1600X.

I'd just splurge another $40 for a 1700 then if I'm in the 1600X's price range

But you'll likely get 4.1Ghz on the 1600X, while the 1700 will likely be at 3.8-3.9. That matters if you're not really using all those cores.

I already explained why I'm going with the 6 core for now. Though I... might actually change my mind. The 1700 might be best to keep to use as a home server later, while the 1600X I would resell. I haven't gotten into home servers yet, though...

I've seen plenty of 1700s at 4.0, and even if not, I won't really mind a 200MHz loss on 2 cores. If the 1600X could do 4.1 on all cores then we'd be talking.

Considering this is a brand new arch, I think it's more than likely that just waiting is the best answer for higher OC results. Once the fabs get their shit sorted we should be seeing better numbers.
I mean, fuck, look at Haswell when it came out. Wouldn't OC for shit; near the end of its Intel-prescribed shelf life, it was consistently hitting 4.6/4.7.

>plenty of 1700s at 4.0
yeah, that's called fucking luck. I'm not going to leave it to luck that I get a chip that only hits 3.8 at 1.35v when I could be hitting 4.0-4.1 at that by buying the one that's better binned.

I'll buy the 3800X, 3900X(tentative), 4800X, or 4900X later.

>paying 60% more for 100-200MHz higher

Well, your money I guess

It's luck anyway, friendo. If you think you're going to be guaranteed 4GHz with a 1600X, you better prepare for potential disappointment. There's not enough data out yet to make judgements on that, but there are certainly 1800Xs which won't do 4GHz on all cores at a reasonable voltage. It sure won't be 1.35V either. Anybody claiming they're stable on all cores at 4GHz at that voltage simply hasn't done enough testing. 1.4V is the bare minimum for the best chips.

Yes, well the 1600X is not that much more when I wanted another cooler anyway.
And when it comes time for a more "final" upgrade, I may as well spend more for the best 8 or 12 core.
If I'm spending $1500 on a monitor, it's not much more on total system cost and blah blah.

It seems like 4GHz is pretty guaranteed on the 1600X from others' results.

>If you think you're going to be guaranteed 4GHz with a 1600X, you better prepare for potential disappointment.

pretty sure 1600 hits 4.0 at 1.45v 70C all cores

>at 1.45v
Jesus fuck.
I wouldn't call anything that isn't maintainable "guaranteed"

That's the point: if the 1800X is anything to go by, the 1600X will hit 4.0 at a much lower voltage.

someone post the "DELID THIS" picture

>wah my slightly higher voltage CPU running 24/7 will last only 11 years instead of 14 WAAAH

1.35v isn't some magical fucking cutoff point

> at 1.45v
Not knowledgeable about the recommended voltages, but that seems a bit too high.

K. I can put the 1600X at 1.45v and see how much higher it goes then. Might hit 4.2 instead.

It is. AMD says it shortens the projected lifespan. But who knows how much. They say 1.4 is safe and it seems like a reasonable recommendation given the Fmax/Vcore wall that Ryzen hits after 1.35-1.4

You're a fucking moron.

The idea is to make sure the CPU you buy TODAY is still going to avoid a CPU bottleneck one or even two GPU upgrades down the line.

No one fucking cares whether a high end CPU of today will run with a GPU of today in 3-5 years time.

Jesus fucking christ
I've been looking into upgrading my setup but now I have no idea when it should be worth it
Anyone have any idea when is Zen+ gonna hit the markets in Europe?

You're the fucking moron, because you think games will remain the same as they are now and that game/engine requirements stay the same at low resolution and at high resolution.

2018, powered by "Just Wait"®-Technology.

But he didn't mention Icelake or the mythical desktop cannonlake :)
Or Volta

Of course they're not going to remain exactly as they are now - Nevertheless we're going to be using DX11 and DX12 for the lifespan of any CPU bought today, and dramatically increasing framerates (by reducing resolution, unless you have a magical infinitely powerful GPU you'd like to share) is basically the only viable way of simulating the demands of future games.

Reviewers have been doing it for DECADES, and they've been doing it for a reason: It works. Look at Bulldozer, for instance. Everyone decried low reso benchmarks because only with complete GPU bottlenecks did it even approach competing. Now we're seeing the same shit again, except obviously Ryzen isn't a complete pile of trash like Bulldozer was.

Would it be significantly worth upgrading to a 1600/1700x from my 2600 non-k?

Doing a fair bit of gaming, but also lots of multitasking and media conversion / encoding / editing...

Still considering waiting for Intel's 6+ core desktop CPUs over the next 8 months, but I'm also feeling impatient.

>no asrock

pff

>i'll simulate the demand of future games by putting everything on clockspeed and drawcalls

Are you this fucking retarded?
Actually, don't answer that, you are.

>not going for the first critical point in the graph to get max efficiency

>Would it be significantly worth upgrading to a 1600/1700x from my 2600 non-k?
Yes

>Still considering waiting for Intel's 6+ core desktop CPUs over the next 8 months, but I'm also feeling impatient.
>supporting jews that kept you waiting so long for a worthwhile upgrade while trying to make you rebuy the same old shit instead

But Bulldozer really did get better. For Honor is a new game and it's much better on an FX-8350 than an i5-2500k.
But Ryzen is nothing like Bulldozer.

really? do you have a guide on how that shit works? i still can't wrap my brain around single vs dual rank, what ram to get, etc.

Effectively the best choice will be 2600MHz dual rank memory once memory multipliers hit in May, so you get the benefits of both higher frequency and dual rank.

Nah, because a large part of why high frequency is good on Ryzen is that it makes the Infinity Fabric operate faster.
It does not seem to care about the latency as much. 3200 CL18 is better than 2133 CL10 in most cases.
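
You can put numbers on the latency part: absolute CAS latency in nanoseconds is CL cycles divided by the memory clock (half the transfer rate, since DDR is double data rate). A quick check with the two kits mentioned above:

def cas_latency_ns(transfer_rate_mts, cl):
    # DDR transfers twice per clock, so the actual clock is half the rate
    clock_mhz = transfer_rate_mts / 2
    return cl / clock_mhz * 1000   # cycles / MHz -> nanoseconds

print(cas_latency_ns(3200, 18))   # ~11.3 ns
print(cas_latency_ns(2133, 10))   # ~9.4 ns
# the slower 2133 CL10 kit actually has lower absolute latency, so 3200 CL18
# winning in practice points to the Infinity Fabric clock mattering more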

I'm JustWaiting(TM) for Zen+. I'll avoid most of the AM4 troubles, get much more mature motherboards, and end up with higher IPC and clocks.

Cannonlake will be much better, It'll demolish AMD without a trace left

I don't even see how it's a meme, it's just logical. A totally new architecture needs optimization before you can take full advantage of it, versus a known architecture that's been pushed to its limits and with a redesign long overdue. Guess which one has more potential performance to gain from microcode updates?

>a skylake dieshrink
>coming in December in 2 core variants(mobile) with a hopeful outlook
>yields such a shitshow Intel canceled the server cannonlake lineup
>if desktop cannonlake even comes it'll be another desktop broadwell, and it's still fucking skylake at 10nm
>still 4 cores
>meanwhile zen+ is in Q1 2018

Lol no.

>JUST WAIT!!!

Cannonlake is mobile only, not desktop, btw.

2.4Ghz 6 year old xeons exist you know

Don't forget the fact that Intel themselves stated that 10nm will be slower than 14nm for a while until yields go up.
People expecting a 6 core clocked at 5.0 need to get their head examined.

What I like even more is that IPC has become literally worthless to Intelfags now and all that exists is clockspeed, almost as if silicon shits itself over 5.0. Where do you even go when your shit is already maxed?

Reminds me of netburst. Hilarious.

when will ryzen support 3200mhz dual channel?

?

You mean dual rank? Whenever someone makes them, dual rank memory doesn't go over 2900MHz IIRC
All those 3200-4000MHz kits are single rank

oh I'm retarded

I assumed that dual rank meant the same thing as dual channel and that ryzen didn't support dual channel above certain speeds

I feel dumb

anyway, what's the advantage of dual rank versus single rank?

1440p at 32" has the same pixel density as 1080p at 24".

>not getting octo rank memory and overclocking them

Ultra poor and fag.

Does Ryzen even support LRDIMMs?

oh ok I guess it has to do with address space and memory addresses

>having more ranks gives better performance. The reason is the addressing scheme, which can extend the pages across ranks, thereby making the pages effectively larger and therefore giving more page-hit cycles.

never knew this, guess I learned something new

>anyway, what's the advantage of dual rank versus single rank?
Dual rank is faster with Ryzen.
www.pcgameshardware.de/Ryzen-5-1600X-CPU-265842/Tests/R5-1500X-Review-Mainstream-1225280/3/

are there any 32gb kits atm that can hit 3000 without fucking with voltages too much or causing thermal problems?

Dual rank kits are much cheaper than high-clocked single rank ones.
They're also faster than single rank at the same speed, pretty close to single rank memory one speed tier higher.

And it's only a matter of time before regular Hynix 3200 shit runs on Ryzen and memory multipliers show up. I'd personally get the highest clocked dual rank kit I can find and go with that.

Hm. Neat.
My 2x16GB I got is dual rank. Hopefully I can overclock it to at least 2666 or 2933.

We've seen nice gains with dual rank RAM; I will withhold judgement until memory controller FineWine is fully developed.