Daily reminder that you aren't allowed an opinion on sweet-spot gaming/workstation CPUs if you don't buy 4-core/8-thread chips. They are consistently only marginally worse than Intel's Extreme-edition CPUs that cost kazillions more, they are considerably better than 4c/4t chips, and they demolish 2c/4t CPUs at those tasks in most tests. If you believe the 1% of your workstation time spent rendering in Photoshop, rather than the 99% spent actually creating, warrants a 99-core CPU, you are seriously deluded and should not speak of CPUs again. Everyone and their mother knows by now that most parallelism nowadays is done on the GPU, which is the parallel machine in your system; CPUs will never stop being good mainly at serial, non-parallel jobs. That is not going to diminish as you are misled to believe, and it may even get stronger now that Vulkan and other GPU-oriented APIs offload even more CPU work onto GPUs.

AMD fucked up by not promoting the 4-core/8-thread CPUs first, like Intel does. Intel knows how to shill: release the 4c/8t parts first, bin them well, and pass them to the reviewers first. Gamers and workstation users will promote you for free; a bunch of 60-year-old no-lifers with a server farm won't, even if they pay more per chip.

Fuck all of them though. They are cartels that hog the patents: NVIDIA and AMD are a patent cartel in GPUs, and AMD and Intel are one in CPUs. That makes AMD, sitting in both, the biggest patent troll of all.

Other urls found in this thread:

youtube.com/watch?v=nsDjx-tW_WQ
anandtech.com/show/5448/the-bulldozer-scheduling-patch-tested/2

Hey faggot, you realize you can deactivate 4 of the cores on a 1700/1800X and push the overclock on the remaining four even higher using Ryzen Master... then save that OC profile and switch to it for a particular use case (i.e. games that prefer fewer threads and higher-clocked single-thread performance).

So for less than the price of a 7700K, a 1700 can be far more versatile, and also actually has an upgrade path with Zen+.

It's too bad 99% of consumer plebs don't seem to realize this.
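For what it's worth, you can approximate the core-count half of that experiment purely in software, no reboot needed, by pinning a process to a subset of logical CPUs. To be clear, this is NOT what Ryzen Master does under the hood (that actually disables cores and changes clocks/volts); it's just a minimal Windows sketch of the idea, and the 0xFF mask assumes the usual layout where logical CPUs 0-7 are cores 0-3 plus their SMT siblings:

```cpp
// Minimal sketch (not Ryzen Master): restrict the current process to a
// 4-core/8-thread slice of an 8c/16t chip via a CPU affinity mask.
// Assumes logical CPUs 0-7 map to physical cores 0-3 with SMT siblings,
// which is the usual Windows layout but worth verifying per machine.
#include <windows.h>
#include <cstdio>

int main() {
    const DWORD_PTR mask = 0xFF;  // bits 0-7 -> logical CPUs 0-7
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        std::fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                     GetLastError());
        return 1;
    }
    std::puts("Scheduled on logical CPUs 0-7 only (a 4c/8t slice).");
    // ...launch or benchmark the workload from here...
    return 0;
}
```

Affinity alone obviously buys you none of the extra clock headroom; that part genuinely needs the disabled cores and the voltage/multiplier changes a Ryzen Master profile carries.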

Yes, but what you miss:
> AMD fucked up by not promoting the 4-core/8-thread CPUs first, like Intel does. Intel knows how to shill: release the 4c/8t parts first, bin them well, and pass them to the reviewers first. Gamers and workstation users will promote you for free; a bunch of 60-year-old no-lifers with a server farm won't, even if they pay more per chip.

>deactivate 4 of the cores
They don't bin them for that, so in all likelihood they are bad overclockers, or even bad stock-clockers, for that job. Intel bins those first i7s to be amazing overclockers or stock-clockers at 4c/8t. Only after their propaganda has worked with those chips do they consider releasing the Extreme versions.

The thing is, those R7-series Ryzens can fill either of those roles... so unless you have budget constraints, there literally isn't a reason to go for the lesser-model CPUs: you can configure the 8c/16t chip as 4c/8t clocked higher to get closer to Kaby Lake's single-thread performance, and switch between these OC profiles at the click of a button in the OS using Ryzen Master.

Reviewers are just shit cunts and have not actually delved into this. In fact, the level at which Ryzen was reviewed was a joke: next to no 4K GPU-bound benches, no SLI/CFX reviews to show what X370 can do against X99 in the HEDT segment, and the majority of reviewers hadn't updated the BIOS on their mobos or gotten their RAM speeds to work properly (mind you, that is a platform bug and shouldn't have been an issue in the first place, but AMD aren't perfect; even Skylake had RAM issues when it first released).

All in all, this launch was a total clusterfuck, and most plebs here on Sup Forums and consumers in general will have a negative initial reaction, due to not bothering to look into the discrepancies that have taken place.

if that's true, why has no one benched any games doing this? it probably doesn't work

Intel has the advantage at binning, though, so their chips are likely better overclockers or stock-clockers. AMD has the shitty job of binning a bazillion cores well right out of the factory. Intel does something smarter by binning for only 4 good cores in the early stages; those clock well, and the benchmarking goes amazingly.

But there is a chance that Intel's whole advantage is having their own foundries. They can do that stuff easily and sell cheaper. AMD has to pay fees to others and may not have good enough access to do a lot of binning.

Some believe "Intel pays a lot to money-sink foundries". That's stupid, because every year they can re-evaluate whether it's profitable, and every single year they conclude it's profitable to build more foundries (which AMD doesn't have the money for).

The way they bin is that the X-series chips (1700X, 1800X) are leakier, so they scale to higher volts and reach higher clocks as long as you can keep them cool (hence the higher rated TDP). The 1700 is a less leaky chip, so it has a lower clock ceiling and doesn't respond as well to volts.

So a 1700X should really be the more recommended CPU, since disabling 4 cores in Ryzen Master would give it much more thermal headroom, and it should (provided you have good luck in the silicon lottery) respond even better to volts.

Also, I see no one posting about Joker's benches showing a 1700 just barely behind a 5.1GHz 7700K at 720p resolution. So in a CPU-bottlenecked benchmark, a 1700 was just barely behind a 7700K clocked 1.1GHz higher (and the Ryzen chip isn't properly optimized for yet; there are quite a few things that need to be addressed that will improve Ryzen's performance).

VERIFIABLE PROOF PLEASE

Read about Ryzen Master, the software suite akin to WattMan, which allows the deactivation of cores and also tells you which ones run the coolest, so you can choose the best-performing cores to keep active and clock harder.

It's yet another thing that was barely, if at all, touched upon by the useless reviewers.

AMD's fault for not releasing 1800X, 1700X, 1600X and 1500X at the same time. They fucked up.

The problem is that 90% of the reviewing, promotion and shilling is done at stock speeds and stock core counts. If Intel can win at stock speeds, i.e. without the reviewer ever touching the BIOS or an overclocking tool, then it has won the game on "sweet-spot" choices. They know very well what they are doing, and they are better at promotion, simply because they know that in the big picture 4c/8t is still the CPU sweet spot: most of the parallelism nowadays on workstations (that don't spend 100% of their life rendering) and in gaming is done at the GPU level, and CPUs will always remain good mainly at the serial, non-parallel stuff. Not only may that not diminish as some are misled to believe, it may become even stronger now that the newest GPU APIs offload even more jobs from the CPU with their command batching.

alright, link to someone ACTUALLY doing this and getting above 4GHz per core.

only like 1-2 cores hit 4GHz lol.

but again, link to someone deactivating cores and hitting above 4GHz on a Ryzen.

Legit

If anyone thinks Ryzen is actually good for work, they are fucking dumb as fuck and haven't done their research

don't tell him to go look for shit

link and show. also prove they will "overclock more"; show some numbers.

I'm pretty sure it fails in most reviews because Intel specifically targets binning well for fewer cores at the start of a new architecture or a refresh. AMD likely fails because they don't have their own foundries anymore, which might make that job more meticulous (and more expensive to begin with). Intel is smart not to promote the 99-core chips first, even if they profit more per chip, since it's the gamers and desktop users who will promote you for free.

/g/entoo

Your last point is the exact reason why more cores and threads will become more relevant as time passes, meaning a Ryzen 8-core chip is a pretty decent long-term investment, especially for the price.

Also Intel has turbo boost in all of the benching, they aint running on stock speeds, and hardly any of the reviewers got a decent OC on their ryzen chips because they didn't update their mobo bios (mind you it was at the last minute that AMD sent them the new bios's, so its not too surprising alot of the reviewers couldn't be fucked redoing everything they had already done).

>a Ryzen 8-core chip is a pretty decent long-term investment, especially for the price.

in no fucking world is a CPU ever a "long-term investment"

what. the. fuck

I like how even when they find a niche job nobody in the real world cares about (e.g. "le gentoo compiling"), the 4c/8t CPUs are STILL doing reasonably well for their price, even though they weren't built for that at all and are amazingly better at almost every other job.

I don't agree with you on the "shit cunt reviewers" part. I have owned a mix of AMD and Intel over the years, but I digress.

Bringing low-resolution gaming up IS important. 4K gaming with a 1080 Ti makes it so your CPU might as well be a potato, or a shoe. So as far as testing at the correct resolution goes, I believe 720p AND 1080p to be the correct methodology.

The point isn't that I have 450 fps and you have 330 fps, "so who cares, the human eye can't see it". It's the % difference.
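Worked out, since the numbers get waved around a lot: that hypothetical 450-vs-330 case is a ~36% gap in CPU throughput, which is exactly what resurfaces the day a faster GPU moves the bottleneck back onto the CPU. A trivial sketch of the arithmetic:

```cpp
// The "% difference" arithmetic for the 450 vs 330 fps example above:
// the gap nobody can "see" today is still a ~36% CPU-throughput deficit.
#include <cstdio>

int main() {
    const double fast = 450.0, slow = 330.0;  // fps in a CPU-bound test
    std::printf("relative gap: %.1f%%\n", (fast / slow - 1.0) * 100.0);  // ~36.4%
    std::printf("frame times: %.2f ms vs %.2f ms\n",
                1000.0 / fast, 1000.0 / slow);  // 2.22 ms vs 3.03 ms
    return 0;
}
```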

Updating the BIOS, well... motherboard companies are still actively working on their BIOSes, so most reviewers updated to the most current BIOS released. Some went further and contacted mobo manufacturers to check if there were more recent versions.

As far as RAM speeds go, most I saw hit 2900. The difference between that and 3000 or 3200 is pretty meh, and it doesn't impact anything if the other CPUs you are benchmarking use the same RAM. Very negligible, if it changes anything at all.

One thing I don't get is why Gamers Nexus did Ryzen w/ SMT, Ryzen w/o SMT, and then Ryzen OC'd w/ SMT, but never Ryzen OC'd w/o SMT. We saw very nice gains when SMT was turned off, to the point that a stock 1800X was doing as well as a stock 7700K. I don't think it could keep up with an OC'd 7700K, but hey.

Also, please show me a video of Ryzen "turning off" cores and OCing above 4GHz. Plz.

so you told me about a feature that by Ryzen's very own nature cannot be activated...?

t-thanks... i guess Ryzen is the best guys. we could, in our imagination, disable cores and clock higher.

That's one of the big issues with all the launch reviews: there are plenty of preview articles from several sites mentioning the software and its capabilities, and then no one fucking used it for their launch-day reviews, which seems very odd.

It would seem a very interesting test to disable the 4 worst cores on a 1700X/1800X and then see how high you can clock the coolest-running ones.

I've seen a lucky few on OCN hit clocks of 4.2-4.3GHz using full custom loops, but overall the Ryzen 8-cores don't seem to clock too well on average. I think it's a direct result of half-baked motherboard BIOSes, considering vendors had about 3 weeks to make them.

>Your last point is the exact reason why more cores and threads will become more relevant as time passes, meaning a Ryzen 8-core chip is a pretty decent long-term investment, especially for the price.
No. Just no. Read what Vulkan/nuOpenGL/nuDirect3D actually do and realize that only a minuscule portion of their advantage comes from CPU multithreading. Most of the speed-up they achieved comes from bundling commands from the CPU into batches of instructions that are now uploaded to VRAM itself, which by definition makes it a GPU job: an offloading of CPU work to the GPU.

Its performance, and how much it gets utilized by the OS/compilers/APIs, will only improve over time.

What part of that concept is hard for you to grasp?...

>an x86-64 CPU
>not designed for compiling
hmm

you do realize that only like 1-2 cores are hitting 4.1, right? and you think a BIOS update is going to magically let you clock an extra 1GHz? An entire fucking 1GHz?

you sound like a shill. are you a shill?

>Bringing low-resolution gaming up IS important. 4K gaming with a 1080 Ti makes it so your CPU might as well be a potato, or a shoe.
Another problem is that they don't even understand what they are reviewing at that point. If the GPU is identical and the CPUs are NOT the bottleneck, then they do test something. But what they test is basic motherboard I/O or other I/O, which is a metric, but a chaotic one and not a very tangible test, because most reviewers and testers have no fucking idea what exactly caused a speed-up or a slow-down in that case (and I guess even Intel/AMD may not know exactly).

Who said anything about 1GHz?.. Who the fuck are you quoting?..

youtube.com/watch?v=nsDjx-tW_WQ So here is a bench of a Ryzen 1700 (3.9GHz) vs a 7700K (5GHz), and it's barely behind in performance in a CPU-bound situation... and it's not even being utilized properly due to poor CPU optimization.

Its performance is going to increase when mobo makers get the RAM situation under control, and when M$ gets the scheduler situation under control.

fuck my bad man, i meant to quote this tard

I wasn't him, just giving an explanation imo.

Barely behind? Did you watch that Tomb Raider opening?

Not even much of a gaymer fag, so I couldn't tell you what game it is where the dude is running with a gun, but @4:30, my jesus, 400 fps compared to 250 fps.

Did you also happen to look at the core utilization on the 1700?

Pretty sure there are some serious optimizations in order; the core scheduler doesn't even work properly in Windows 10, and Microsoft is working on a Windows update to fix that.

So a gimped 1700 is barely trailing behind (in most cases, bar the one you pointed out) while clocked 1.1GHz lower, with its cores not even being utilized properly... wow, what a shit CPU! AMD BTFO!

>change BIOS settings every time I want to use a different kind of software
>buy 8 core CPU then deactivate 4 cores

Jesus, you AMD shills are so stupid. Just go with a 7700K...

>ignoring the mention of Ryzen Master

terrible idea. disabling half the cores just nullifies the one thing Zen has going for it (good multithread scaling) in order to gain a 100-200MHz higher OC. you're trading away half your multithreaded throughput for maybe a 5% clock bump.

>core utilization
Not him, but that's burying your head in the sand and missing the big picture: those games have the best programmers in the world, and they don't avoid utilizing 100% of 16 threads because they are suddenly stupid, but because they simply can't, or may not be able to.

Most people don't get that in many cases it's WORSE to use the CPU rather than the GPU. GPUs are by definition parallel machines and are better at doing anything parallel, unless you are forced not to do it on them. With the newest GPU APIs, even more so.

Even when people cite certain special cases, e.g. "my AI is complex and requires serial performance on a CPU", they don't get that in many cases the game or desktop software simply doesn't need that, or even if it does, it might not need it to a large extent.
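The "simply can't" part has a name, by the way: Amdahl's law. A toy sketch; the 60% parallel fraction is a made-up illustrative number, not a measurement of any real engine, but it shows why piling threads onto a partly-serial frame runs out of steam fast:

```cpp
// Amdahl's law sketch: speedup for a workload that is only partly
// parallelizable. The 0.6 fraction is illustrative, not measured.
#include <cstdio>

double amdahl(double parallel_fraction, int cores) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main() {
    for (int cores : {2, 4, 8, 16})
        std::printf("%2d cores: %.2fx speedup\n", cores, amdahl(0.6, cores));
    // 2: 1.43x, 4: 1.82x, 8: 2.11x, 16: 2.29x -- the returns collapse fast.
    return 0;
}
```

If 40% of the frame is serial, even infinite cores cap you at 2.5x, which is why the serial performance of the chip keeps mattering.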

Except all that's required to do this is clicking a saved profile in Ryzen Master and then launching the game... if you're doing something else that makes use of all 8 cores, run a different profile.

>Pretty sure there are some serious optimizations in order; the core scheduler doesn't even work properly in Windows 10, and Microsoft is working on a Windows update to fix that.

History repeats itself. Looks like AMD couldn't even make up new excuses to justify the shit performance, just reusing the same memes from the Bulldozer era. anandtech.com/show/5448/the-bulldozer-scheduling-patch-tested/2

If there's any 'fix', it will happen a year or two from now when AMD gets a chance to respin. A magic BIOS update or changes to scheduling in the Windows kernel won't fix the bad OC headroom or gaymen perf.

Why are you deflecting to how the GPU works, when the video I posted was a CPU-bound situation, relying heavily on the performance of the CPU?

Yes, they do CPU-cap there, but what you don't get is that most games do not use only a few cores because their programmers are suddenly stupid, but because their programmers know it's more efficient NOT to do parallel jobs on the CPU, since GPUs are MUCH better at them. And even when you do find a job that must use the CPU (say a complex form of AI), the game may not even have that need, or when it does, it might not need a heavy version of it.

turns out those low fps numbers go back up on Windows 7, and the issue is Win10-only.

expect a fix and radically better benchmarks within days.

But isn't the whole point of a low-level API to offload some of the less necessary tasks the GPU would usually be bogged down by to the CPU?

And since consoles are 8-core APUs, and optimization for games seems to be heavily influenced by consoles, one would conclude that programming in the future will be spread out over multiple cores, no? (That is assuming devs pull their finger out of their ass and make use of these resources that are available to them... highly hopeful, I know.)
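If devs ever do pull their finger out, the console-style answer is a job system: slice the frame into independent tasks and farm them out to every hardware thread. A toy sketch of that pattern, with the job count and "work" made up purely for illustration:

```cpp
// Toy job-system sketch: split N independent per-frame jobs across all
// hardware threads, console-engine style. Purely illustrative.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int jobs = 10000;                       // pretend per-frame work items
    std::atomic<int> next{0}, done{0};
    const unsigned workers = std::thread::hardware_concurrency();  // 16 on an R7
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&] {
            for (int i; (i = next.fetch_add(1)) < jobs; )
                done.fetch_add(1);                // stand-in for real entity work
        });
    for (auto& t : pool) t.join();
    std::printf("%d jobs finished across %u threads\n", done.load(), workers);
    return 0;
}
```

The catch, per the Amdahl sketch above, is that this only pays off for the genuinely independent slice of the frame.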

That's almost certainly the wishful thinking of desperate AMD shills, because they are only speculating on something someone on a random forum found in a hardware info script about the CPU's L2 and L3 cache. OSes do not have access to the L2 and L3 caches, so even if you "fix it", you will only "fix" the info script, not the actual hardware.

>the game may not even have that need, or when it does, it might not need a heavy version of it
>may not even have that need
>may not even
>may
Goddammit, didn't someone already BTFO you the other day? All you have are small words without substance.

>But isn't the whole point of a low-level API to offload some of the less necessary tasks the GPU would usually be bogged down by to the CPU?
No, absolutely not; see what the Vulkan/nuOpenGL/nuDirect3D APIs actually do. They do have methods that make CPU multithreading easier, but they also now have methods to take entire bundles of instructions the CPU issued, upload them to VRAM BEFORE THE GAME EVEN STARTS, and replay those bundles from the GPU itself. That is the main advantage you are seeing in those tests, definitely not the CPU-level multithreading, simply because those games/tests were never that CPU-bound to begin with!

It actually surprises me how long it took them to realize that, because it's common sense to bundle CPU-side OpenGL/Direct3D instructions into batches pre-uploaded to GPU VRAM; they literally could have done that since the 90s.

It's not just random offloading; it's simply much faster to have them already in VRAM, because working a GPU out of VRAM is tremendously faster than having to feed it from the CPU mid-game, which goes without saying.
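For anyone who wants to see what "bundle once, replay every frame" looks like in practice, here's a heavily abbreviated Vulkan sketch. The device, command pool, queue and the actual draw recording are assumed to exist elsewhere, error handling and synchronization are omitted, and where the driver physically keeps the recorded buffer (VRAM or otherwise) is its own business:

```cpp
// Abbreviated Vulkan sketch of "record once, replay every frame".
// device/pool/queue and the draw recording are assumed to exist elsewhere.
#include <vulkan/vulkan.h>

VkCommandBuffer record_once(VkDevice device, VkCommandPool pool) {
    VkCommandBufferAllocateInfo alloc{};
    alloc.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
    alloc.commandPool = pool;
    alloc.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    alloc.commandBufferCount = 1;

    VkCommandBuffer cmd;
    vkAllocateCommandBuffers(device, &alloc, &cmd);

    VkCommandBufferBeginInfo begin{};
    begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    vkBeginCommandBuffer(cmd, &begin);
    // ...record ALL the draw calls here, once, before the game loop...
    vkEndCommandBuffer(cmd);
    return cmd;
}

void submit_frame(VkQueue queue, VkCommandBuffer cmd) {
    // Per frame, the CPU does almost nothing: it resubmits the prebuilt bundle.
    VkSubmitInfo submit{};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}
```

Contrast that with old-style GL, where the equivalent work was re-issued call by call from the CPU every single frame.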

>may

Yes brother, I did. What that tells me is what we all knew before: video games aren't coded well, blah blah blah, most only tax the first core.

The same reason Ryzen will do well in synthetic 3D rendering, which, if that translated into video games... well, you see my point. But games don't tax 8 cores now, they didn't 5 years ago, and they probably won't in 5 years. And I really, really hate all this future talk. I want to live in the NOW. I want to see what games it is optimized for NOW, not later.

The "optimizations in order" have to due with the architecture of the chips. Also, how is it gimped? It's gimped by it's own design. Why is it being 1.1ghz slower even a talking point? Why aren't we just comparing chips, A to B, without saying "And that one hits 5ghz sheesh I mean cmon!".

> I have no argument, let's just randomly say they were "BTFO" somewhere, nobody knows where, except in my stupid delusions
you are a stupid child

>may

you stupid little child, find us your amazing examples showing it works otherwise, or are you going to post "gentoo compiling" again?

> I have no argument, let's act like a little child once more
busted

>may

Thanks for reminding me I won the argument by reducing you to a stupid child. back to your mommy now.

>may
literally undermined your entire argument

No, you stupid asshole, that was the entire point of using the word: only a minority of those cases can heavily multithread efficiently at the CPU level.

Only stupid children believe the world works in absolutes, that something is supposedly only "100%" or "0%".

Sure, you COULD make a game 100% multithreaded, but the real world called and it says you're stupid.

Nice try, Intel. You'd better get working on an 8c/16t Cannonlake.

To do what with it? Get a WORSE product? The programmers of games and workstation software are NOT stupid; they avoid using the CPU on PURPOSE, because anyone short of stupid knows most of the parallelism is done on the GPU nowadays.

Hey guys, what's the maximum RAM speed Ryzen supports?

>botched IOMMU
>still no mobo with ECC support
>workstation
AYYYYYY LMAO

> muh superior $1,000 cpu gainz
Bullshit. A regular 4c/8t Intel (or an AMD 4c/8t, if they could bin one well, I guess) is more than enough for most serious workstations. What the fuck are you gonna claim?

That they are unstable or something? I haven't seen a single crash in decades on software that is half-decent.

Take your elitist bullshit outside kiddo.

>4C/8T workstation
Off yourself poorfag.

For all the /o/fags lurking, here's my comparison:
Intel is that super-stock sports car you buy and never tune because it already performs well. It's got an 8-cyl with a solid supercharger, and you don't need to do anything extra to make it a go-fast machine.

AMD, on the other hand, is the cheap car that you tune yourself. It's got a V8, but a poorly tuned one without a turbo. You install your own turbo and tune it to run like the SS, but it takes time and work to do so.

The R7 1700 costs as much as the i7 7700K.
The R5 1600 would cost half as much as the i7 7700K for 88% of the framerate.