Simulated R5 1500X/1600X gaming results: they're all the same

All Intel CPUs clocked at 4.8GHz, and the R7 at 4.0GHz.
Now, this is only a simulation of what the R5 will most likely be (taking into account what we know of CCXs, cache amounts and so on; basically an R7 with cores disabled), but if this checks out, we suddenly have a very strong jack-of-all-trades in the $150-250 range.

>simulation
>Simulated results

what the actual fuck are you on about?

Another graph to put the shill claims to rest. Also, I forgot the link like a retard:

techpowerup.com/231762/simulated-amd-ryzen-5-series-chips-as-fast-as-ryzen-7-at-gaming

And another

What we know of the 1500X and 1600X is that they're the same silicon as the R7s, only with fewer active cores. So by disabling cores in each CCX in the BIOS, there's a chance of ending up with the same CPU specs as those chips, hence an idea of what they can do.
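The layout being imitated can be sketched like this (core numbering and the symmetric 2+2/3+3 split per CCX are assumptions based on the description above, not the article's exact BIOS settings):

```python
# Hypothetical sketch: Ryzen 7 is two 4-core CCXs, and the R5 parts are
# assumed to be the same die with cores disabled symmetrically per CCX.

def active_cores(ccx_count=2, ccx_size=4, keep_per_ccx=2):
    """Return the physical core IDs left enabled when keep_per_ccx
    cores stay active in each CCX (the article did this in the BIOS)."""
    cores = []
    for ccx in range(ccx_count):
        base = ccx * ccx_size
        cores.extend(range(base, base + keep_per_ccx))
    return cores

# "1500X": 4 cores as 2+2 across the two CCXs
print(active_cores(keep_per_ccx=2))  # -> [0, 1, 4, 5]
# "1600X": 6 cores as 3+3
print(active_cores(keep_per_ccx=3))  # -> [0, 1, 2, 4, 5, 6]
```

The point of the symmetric split is that each remaining core keeps the same share of its CCX's L3 cache, which is why the simulated chips should behave like the real parts.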

Poorly or very weakly multithreaded programs. Most likely the bulk of the main work is done by a single core and the other cores act as auxiliaries for things like audio threading.

Decent multithreaded program. Looks like the bulk of the main workload is spread evenly across the cores.

I wish they showed Windows core usage when they do benchmarks so we could understand what the actual limitations are. Are they really "CPU bottlenecks" or just poorly multithreaded applications?

I'm betting on poor optimization. After all, coding software for multithreading isn't as simple as "just use all the threads you can see". Look at the gap between the 1800X and the "1500X": 10 to 15% at most. Then compare it to the gaps on the Intel side. My bet is Mafia 3 is optimized mostly for 8 threads, and the small increases in performance are probably other Windows tasks offloaded to the extra threads. I bet if we limited Ryzen to 2C/4T, performance would drop like a brick.
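The "optimized for 8 threads" hunch can be eyeballed with Amdahl's law. If roughly 70% of the frame work is parallel (a made-up figure, picked only because it reproduces the observed gap), going from the 1800X's 16 threads down to 8 costs about the 10-15% seen in the graphs, while dropping further keeps costing more:

```python
def speedup(p, n):
    """Amdahl's law: overall speedup with parallel fraction p on n threads."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.7  # hypothetical parallel fraction, chosen to fit the ~13% gap above

gap_16_vs_8 = speedup(p, 16) / speedup(p, 8) - 1  # 16T over 8T: ~13% faster
gap_8_vs_4 = 1 - speedup(p, 4) / speedup(p, 8)    # 4T under 8T: ~18% slower
print(f"{gap_16_vs_8:.0%}, {gap_8_vs_4:.0%}")
```

A 70% parallel fraction is nothing special as games go; the takeaway is just that the small 16T-vs-8T gap is exactly what you'd expect when most of the frame still hangs off one thread.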

@4.0
I would like to see proper stock clocks instead of OCed ones.

Intel's offerings are also heavily OC'd.

This was more a test to prove a point than an actual review. If anything, it's surprising to see how games are already coded to go above 4 threads. The techspot source has a couple more tests, btw.

I really hope Zen 2 prioritises clock speed upgrades

i thought g is about technology, CS, IT and shit, and all i read in these amd threads for the past 2 weeks is muh gayming on ryzen.

so g is basically v for autistic fucks?

>discussing how the coding behind a specific type of software interacts with different hardware

I could agree with you if we were discussing the games themselves, but it's not the case

Probably.
I just don't get why it's important for a CPU to compute 200fps in a game instead of, say, 150fps. It shouldn't matter anymore at this point.

Test it at 1440p and 4K

>at this point

Key words right there. If we assume the performance gap stays proportional over the next few years (we have nothing that guarantees further multithread support in software, or extra instruction sets), those 200 and 150fps become 150 and 115, which becomes noticeable on 144Hz screens, or, more extreme, 60 and 45.
The point here, on the other hand, is that dropping a few cores makes little to no difference in what's currently available, which can shake up the mid price brackets: for the average home-use scenario, light multitasking and little more than gaming (which, let's face it, is a sizable portion of Sup Forums), the R7s are not a very good idea: at $300+ you're right in 7700K territory, which curbstomps them at anything that isn't heavy multithreading. Dropping the "unnecessary" cores and selling it at half the price turns it into a very appealing offer for this demographic.
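The projection is just holding the ratio constant (the 115 figure is rounded; strictly it'd be 112.5). A sketch, assuming a fixed 75% performance ratio between the two CPUs:

```python
RATIO = 150 / 200  # CPU B delivers 75% of CPU A's fps today

def cpu_b_fps(cpu_a_fps):
    """If the gap stays proportional, project B's fps from A's."""
    return cpu_a_fps * RATIO

print(cpu_b_fps(200))  # 150.0 today: nobody cares
print(cpu_b_fps(150))  # 112.5: now B misses a 144Hz target
print(cpu_b_fps(60))   # 45.0: now it genuinely hurts
```

Same multiplier every time; the only thing that changes is how close the slower chip's number sits to a threshold you actually notice (144, 60).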

I don't know anything about linear requirement progression.
It doesn't seem that performance requirements will rise that fast, judging by the past. Take the i7 2600 or the Q6600 as examples.

I don't believe that 8+ threads in games will become a thing, but I do believe that more cores equals more comfy.

Also, 144hz is a meme for gaymers.

I'm betting on single-use engines dying/becoming irrelevant, so everyone ends up on either a publisher's engine or licensed-out CryEngine, Unreal or Unity. The few games that use special engines either have no need for multithreading to begin with and would run on a toaster, or the team that thought it was a good idea is so incompetent that the game isn't worth playing to begin with.

nah, clock speed is good enough; what they need to focus on is decoupling the inner workings of the CPU from RAM speed. Could be possible to do that through the BIOS.

Because shilltel shills have been shillposting all over the place for weeks.

honestly, having a 144Hz right next to a 60Hz and going from screen to screen, it's almost painful to do anything on the 60Hz because I can see how choppy even moving a window is.

because most games can be pushed close to 144fps even on shit processors, and even regular applications feel better at 144Hz, I can't call it a meme. What I can call a meme is 200-240Hz monitors, as most games can't be pushed that high on current CPUs, and even the few that can can't do it at 1080p without turning settings down.

Don't get me wrong, Intel makes great hardware.

You mean it's like working on a PC without an SSD?
I mean, 60fps feels just decent for like 99% of the games you'd play.

his point is that a higher-Hz monitor feels better; it's not so much about fps but about the monitor.
honestly I can't tell the difference between 90 and 140fps when the monitor sits at 144Hz.

Eh, personally I couldn't tell the difference between 60 and 144Hz when switching between them. Must come down to one's perception, I reckon. I do notice any time I drop from 60 to 40-45fps though, so if I can keep a steady fps, all the better.

mhm, i remember working with and playing on a CRT at >72Hz. didn't notice that much of a difference when i switched to a 60Hz TN.

I meant linear in the sense that CPU A and CPU B keep the 33% performance difference over the years. At some point one will be just enough for the task while the other struggles to keep pace.

Okay, i got your point. Thing is, the R7 has more overall performance than the i7. xd

I think the R7 will age very well, as the old i7 CPUs did.

Anything over 100MHz is a meme for games that aren't FPS.

>100MHz
hehe

You haven't done that though, you know-it-all fuck. There hasn't been a single high-level point made ITT beyond "optimization is probably bad."

>technical discussion isn't technical

Ok then

Quad cores as the top of the mainstream CPU lineup needed to go into retirement at least 2-3 years ago.

It will probably focus on IPC and raising the clock in the most efficient performance range (say 2.4 to 3.0GHz now with v1), because that is how you make money in servers.

...

they think that by cutting cores they can predict the performance of the lower-binned chips that make up the R5 line.

>everything i do is multi-core or handles multi-core well.

That site looks like an AMD shill with those benchmarks. Ryzen is almost 20% behind a 7700K in gaymen.

the GTX 1070 seems to be a bottleneck. they also tested with a 1080 Ti and the gap was much wider.

fyi, techspot = hardware unboxed. it's literally the same guy who does the benchmarks for both.

In my case, not nearly all. But when I look only at the tasks that actually need the performance, those are multithreaded and can scale or run in parallel.

I don't really need MP3s to play or encode faster.

Post some ass

>4.8GHz
>heavy
It's a mild overclock.

...

> He proceeds to cherry pick a game that runs 999 AI bots
kys

Bullshit. Only 78% of 7700Ks even make it above that frequency, and even fewer at non-housefire voltages. That drops to 59% at 5GHz and 28% at 5.1GHz. At best it's 200MHz off the maximum possible overclock for the vast majority of chips.

what?

Pleb, at least OC the 7700K to 5GHz; 4.8 is only what the poor bins are limited to.
Also, given you can always OC Ryzen to 4GHz on even the shittiest bins, it's not a reliable simulation.

To be fair, only 20% of R7s clock to 4.0GHz.
But if you lower the target to 3.9, the percentage skyrockets to 70-80%, at least according to Silicon Lottery. Losing 2.5% of clock isn't something to pull your hair out over, really.
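The trade-off being described is tiny on paper. Using the Silicon Lottery figures as quoted in the thread (taking the low end of the 70-80% range):

```python
clock_loss = (4.0 - 3.9) / 4.0        # give up 100MHz off a 4.0GHz target
yield_gain = 0.70 - 0.20              # from ~20% of chips hitting it to ~70%
print(f"{clock_loss:.1%} slower")     # 2.5% slower
print(f"+{yield_gain:.0%} of chips")  # +50% of chips now make the bin
```

In other words, a 2.5% clock concession more than triples the share of chips that qualify, which is the whole argument for not chasing the last 100MHz.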