Neural net predictive action at work

Intel.

magic?

Nah, it just gets smarter the more you rerun the same workload. I can see this being very useful in Big Iron where they rerun the same shit over and over and over again.

Is it useful for mining cryptocurrency?

It's that "machine learning" people dismissed at the new horizon event.

CAN I RUN """"GAMES"""" 5 TIMES AND GO FASTER

>never turn computer off
>it becomes the most powerful machine on earth

No.

So I gotta grind my cpu now? Wtf

Wait so Zen's branch prediction is based on neural networks? That's pretty fucking cool.

Maybe some parts of the game, but this depends a lot on predicting future events, and with games being interactive that will be difficult.
Rendering, like in the OP, is a linear workload that's much easier to predict.

>people are actually retarded enough to fall for this bait

>graph starts at 1610 and ends at 1640
Nvidia tier.

So it actually works?

Source articles?

Actually interesting stuff. I didn't think it would do much. The fact it does do anything means we'll see Intel copy it later.

It's a small boost, only a few points, but it is consistent. This will be a nice gain for things like SAP and other analysis software that run 24/7.

A 1% difference.

It works, the question is, who cares?

>tfw Skynet will emerge from AMD

>it gets worse
JUST.

IT'S LEARNING
SOON IT WILL JUDGE HUMANS TO BE INEFFICIENT AND WE WILL WAGE THE BIGGEST WAR OF ALL TIME

Yes, but you have to beat a game 5 times to make it jump from 110fps to 115fps.

>Who cares
Anyone who uses their PC for anything other than gayming.

>It works, the question is, who cares?

It can be improved in the future. Both by AMD, and by programmers once it's better understood.

The fact it's even 1% is like that fucking reactionless thruster. If you told me they'd have even a working attempt at this by 2017 I'd have laughed.

Finally, I can live my life as a battery playing out in a virtual world and escape the tedium.

Pretty much a server feature, like most of the stuff in Zen.

It's neat though, I heard AMD talking about it but I didn't think it would be so consistent.

With an actually long workload it can add up to saving an hour or two.

Enjoy your 1% improvement.

"Programmers" won't be able to access it, since it's low-level encoded at the hardware.

And even then it's limited in what it can do by the hardware. You're not going to run the same test another six times and get another 1% improvement.

So it's not exactly physics defying like an EM Drive.

everyone disregarded it as marketing fluff
it actually works though, huh

An hour or two off of a 3 day project.

you can fit another 3 day project per year with that one hour cut
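
Quick back-of-the-envelope on that claim, assuming the jobs run back to back all year (the numbers are illustrative, not from the OP):

[code]
# Does an hour or two saved per 3-day job really buy an extra job per year?
# Assumes jobs run back to back with no idle time in between.
hours_per_job = 3 * 24                      # one "3 day project"
jobs_per_year = 365 * 24 / hours_per_job    # ~121 jobs a year
for saved_hours in (1, 2):                  # hours saved per job
    extra_hours = saved_hours * jobs_per_year
    print(f"{saved_hours}h saved per job -> {extra_hours:.0f}h/year "
          f"-> {extra_hours / hours_per_job:.1f} extra jobs")
[/code]

One hour saved per job works out to roughly one and a half extra 3-day jobs a year, two hours to over three, so the claim holds up.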

That's my plan.

I can see where this is useful, but it needs to be predictable, so obviously there will be BIOS settings to handle the sample sizes.

A good example is a 40k-core setup running some kind of molecular dynamics over a very long period of time; this feature is basically free performance for a setup that costs no small amount of money to upgrade.

Once again not really a consumer feature, but it is nice knowing something like this is more than just a line on a whitepaper.

What it does tell us is that AMD is playing to win when it comes to these sorts of workloads. The next move they make will be Vega, which - judging from that shitty event last week - seems to have some (a lot of?) hardware suited to these sorts of things.

I guess this is the current evolution of AMD's HSA initiative.

>1610 to 1640
>30 units
>max delta of 15 between runs
>less than 1% change

>820 to 920
>100 units
>max delta of 5 between runs
>less than 1% change

margin of error

Wonder if it's held in some dedicated SRAM or in the L3?

Looks like it's in the load registers, so the data is literally byte-sized.

The longer you let the game run, the better performance you get. You can't see that from the retarded reviewers who bench 30 seconds of gaming for each game.

>who bench 30 seconds of gaming for each game

Amusingly enough that's almost entirely the inverse of Nvidia's GPU Boost - it will clock really fucking high in those 30-second runs reviewers do because the card hasn't heated up yet, which (to a degree) artificially inflates numbers.

It's a shitty practice but everyone does this crap, nobody mentions it because muhpayrolls

Actually, AI or repeating post effects might get a boost, or things like day-night cycles for scripted characters.
I know it's a margin-of-error increase but it would be there.

>nobody mentions it because muhpayrolls
>gotta get review ready before NDA lifts or less monies

This is false for hardware though, and especially YouTube, since everyone has their own isolated audience that would watch the video anyway.

Can't wait for Intel Luddites to declare it dangerous and assert that you should buy a 7700k because it won't try to take over the world.

how long do you have to keep running it before it can become your wife?

If I use it to crunch SETI@home for a couple years does this mean I will get an Orion slave girl?

That Nvidia GraphWorks™ though...

But yeah, that prediction thing seems to help a little, which is cool.

Great for people who need to do the exact same thing several times, i.e. nobody

>says someone who probably shitposts all day

But each of my posts is unique.
Great CPU for brap posters

>each of my posts is unique
lol

>using a cpu to train a neural net instead of gpu

Pajeet, go home

Huang, pls.

White American male here.

Pic for proof of my claim, now gtfo OP

>1% improvement

INTEL BTFO IT'S OVERRRRR

These are deltas of under 1%

This is margin of error

Anyone who uses a CPU for neural net training has a ThinkPad budget for their gaymen rig.

>4 year old data

>retried 6 times
>consistent increase
>margin of error

The chances of that happening randomly are astronomically small

Nobody is talking about using CPUs for neural net training. They're talking about the (claimed) neural net branch prediction inside Ryzen CPUs.

How does this work? Is this the scheduler learning?

Nobody is talking about that you spaz

You know nothing about statistics

Are you fucking kidding me? Nice bait troll.

>implying in the past 4 years cpu's have gotten 200x faster than gpu's at matrix math

I'll let the machine learning community know.

You also know nothing about statistics

If you think the topic at hand is the performance of neural net training on the CPU and not the neural net branch predictor in the CPU you're a fucking retard and need to leave.

1636/1619 ≈ 1.0105, so about a 1.05% gain
893/888 ≈ 1.0056, so about a 0.56% gain

>mfw AMD makes the only CPUs that level up

>another person who knows nothing about statistics

There's no reason to buy Intel now with their obsolete trash CPUs.

"Neural net prediction" is a simple perceptron, just like in every CPU from the past 15 years.
Don't get me wrong, Ryzen's branch prediction looks really really solid compared to Intel, but it's definitely not some unheard of black magic in chip development.
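
For anyone curious what "a simple perceptron" means in this context, here's a toy sketch of a perceptron branch predictor. The table size, history length, threshold and weight width below are illustrative guesses, not AMD's actual Zen design:

[code]
import random

# Toy perceptron branch predictor (Jimenez/Lin style). All sizes are guesses.
HISTORY_LEN = 16                              # bits of global branch history
NUM_PERCEPTRONS = 1024                        # entries in the predictor table
THRESHOLD = int(1.93 * HISTORY_LEN + 14)      # classic training threshold
W_MAX, W_MIN = 127, -128                      # saturate like 8-bit weights

table = [[0] * (HISTORY_LEN + 1) for _ in range(NUM_PERCEPTRONS)]  # bias + weights
history = [1] * HISTORY_LEN                   # +1 = taken, -1 = not taken

def predict(pc):
    """Dot product of weights and recent history; predict taken if the sum >= 0."""
    idx = pc % NUM_PERCEPTRONS
    w = table[idx]
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return y >= 0, y, idx

def train(idx, y, taken):
    """Nudge the weights after a mispredict or a low-confidence prediction."""
    t = 1 if taken else -1
    w = table[idx]
    if (y >= 0) != taken or abs(y) <= THRESHOLD:
        w[0] = max(W_MIN, min(W_MAX, w[0] + t))
        for i, hi in enumerate(history, start=1):
            w[i] = max(W_MIN, min(W_MAX, w[i] + t * hi))
    history.pop(0)                            # shift the outcome into history
    history.append(t)

# Toy run: one branch that's taken about 3 times out of 4.
hits = 0
for _ in range(10000):
    taken = random.random() < 0.75
    guess, y, idx = predict(0x4004f0)
    hits += (guess == taken)
    train(idx, y, taken)
print(f"prediction accuracy: {hits / 10000:.1%}")
[/code]

The takeaway is that it's a few small saturating counters per branch, not some giant neural net living in the CPU.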

I wonder how far it can go? 2%? 5%?

>really really solid compared to Intel,

Brave words to post on Sup Forums, given Sup Forums believes that Intel will always - without fail - be ahead of AMD. Even now Sup Forums believes the P4 bested the Athlon.

A 1.1% increase? That's about 1% more than Skylake to Kaby Lake.

>graph axis is from 1610 to 1640

Let them, I don't care what some lardass whose argument is nothing but >MUH GAYMEN on a Chilean strawhat weaving textboard thinks.
It's solely his fault for making poor choices and ending up with a slower, more expensive product.

Even if you argue with nothing but >MUH GAYMEN, it doesn't make sense.
Gaming is far from the only thing a person does at the same time.
They will have multiple programs open in the background, be it web browsers, IM clients, voice comms etc., all chugging away on those precious 4 i5/i7 cores, while a 6/8-core chip can easily offload these tasks onto unused cores.
This is actually the reason I bought a 4930k several years ago.

>Sup Forums
Try Sup Forums.

Is there a difference?

Supposedly so.

Can someone ELI5?

So now that Intel's 8-core housefires are obsolete will Ryzen's 4 and 6-core variants make i5 obsolete?

Don't really give a shit about poorfag CPUs

There's a 1 in 64 chance of that happening randomly. Not what I'd consider "astronomically small".

If you only look at the straight math, sure, but this is the CPU starting weaker and getting more and more powerful as you go on doing your task, with a step up each time. The odds against that are quite a bit longer than 1 in 64. Look at the Intel one, that's what everyone expected to see, not a ramp-up over 6 tests.
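
If you want to sanity-check the odds being thrown around in these two posts: the 1-in-64 figure comes from treating each rerun-vs-previous-run comparison as a coin flip, and the stricter "whole ramp in order" reading is even less likely. Rough sketch below; the run count (1 baseline + 6 reruns) is an assumption read off the OP's graph:

[code]
import random
from math import factorial

# Null hypothesis: the feature does nothing and run-to-run scores are pure noise.
N_RUNS = 7                                    # assumed: 1 baseline + 6 reruns
N_COMPARISONS = N_RUNS - 1

coin_flip = 0.5 ** N_COMPARISONS              # every rerun beats the previous one
exact_monotone = 1 / factorial(N_RUNS)        # all runs in strictly increasing order

trials, hits = 200_000, 0                     # Monte Carlo check of the monotone case
for _ in range(trials):
    scores = [random.random() for _ in range(N_RUNS)]
    hits += scores == sorted(scores)

print(f"each rerun beats the last: {coin_flip:.5f} (1 in {1 / coin_flip:.0f})")
print(f"all runs in order (exact): {exact_monotone:.6f} (1 in {factorial(N_RUNS)})")
print(f"all runs in order (simulated): {hits / trials:.6f}")
[/code]

Under that null, a clean six-step ramp sits well outside what margin-of-error noise usually produces.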

During Haswell some people benched games on i7s and on 6- and 8-core chips in real-world use-case scenarios. Sure, the 6/8-core parts were slower, but god damn that load balancing, that's something you just don't see in normal fresh-boot, nothing-running benchmarks.

This is rarely mentioned, true. People seem to forget you do have an OS running in the background with some 2000+ threads spawned, most not doing anything, but all it takes is one going haywire to lower your framerate, and with Windows 10 being Windows 10 that's more likely. Hardly anyone but reviewers runs games on a sterile Windows install.

You made sense until this part
>They will have multiple programs open in the background, be it web browsers, IM clients, voice comms etc., all chugging away on those precious 4 i5/i7 cores
I'm sure all those extremely costly processes which cause 0-2% total CPU load while in a low power state at 800MHz are going to have an absolutely catastrophic impact on game performance when the processor operates at full speed at 4.7GHz or beyond.

Better double up on dem cores, get that $500 1800X, wouldn't want Steam to have 0.01% CPU usage in the background while a game is running!

It's called context switching: you're playing and suddenly the CPU has to flush caches to do whatever the fuck's going on in the background, be it an AV for normalfags, some dynamic browser tab, an IM client, or svchost and telemetry suddenly throwing a fit.

IT HUNGERS FOR KNOWLEDGE
HUMANITY WILL FACE AN EXISTENTIAL CRISIS VERY SOON


AMD HAS DOOMED US ALL

We have to go deeper

Thanks AMD, I'm literally never buying from you guys again, and an overclocked 1700 looked really tempting.

Graphs are Nvidia tier. Presented clearly though, Ryzen's performance even with launch hiccups is very good. This particular aspect is pretty interesting though.

Hadn't even thought of it in this regard... yeah, Naples is going to crush it in enterprise.

Of course you're not buying because by the time you need a new CPU these things will have already taken over humanity and started using us as manual labor to maintain their machine kingdom.

Feels like this points towards why frametime consistency was strong in CPU-bound gaming loads. Am I wrong? Is there any particular part of the architecture credited for that?

That's mostly due to more cores, load balancing.
Pegging 4 cores at 100% is more jittery for the OS than 8 cores at 70%, since the OS has to pull extra cycles that have nothing to do with the game from somewhere, leading to drops.

Am I the only one who noticed the actual performance increase is 1%, but they scaled the graph to make it look much bigger?

Yes ur wery smarts here have a dongle

>Same 1% difference
>Scale graph differently to make it appear more drastic

OH GOD

>you guys need to get the 7600k not the 7400 for the 0.1% gaming improvement
>DON'T BUY AMD EVEN THOUGH IT GETS A FREE 1% IMPROVEMENT IN APPLICATIONS THAT MATTER

OP is a masterbaiter.

...