Le I only need 100% multiprocessing le meme

>Le I only need 100% multiprocessing le meme
Serial performance will always be required in interactive applications.
Unless you never actually sit at a computer and only encode x265.

Laymen have the delusion that it's a matter of progress.
It's not a matter of progress; it's a mathematical fact that you will always require serial performance on CPUs.
This is because an interactive application must always adhere to the rules of a global loop, and that loop must always be protected with inefficient locks, in contrast to purely non-interactive rendering.

You can ask all you want for your next game to be "100% multiprocessed", but it will never happen.
It's how the mathematics works.
You will always need strong threads, not only in gaming but in anything that requires human input, including setting up your Photoshop before you actually render, which is 99% of the time you actually use a computer.
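
To put the "global loop" argument in concrete terms, here is a minimal C++ sketch (all names are hypothetical, not taken from any real engine): the workers can run in parallel, but every touch of shared state funnels through one lock, and the loop itself runs serially.

#include <mutex>
#include <queue>
#include <thread>

// Hypothetical shared state touched by input handling and workers alike.
struct World { int player_x = 0; };

World world;
std::mutex world_lock;          // the "inefficient lock" guarding shared state
std::queue<int> input_events;   // events the user creates at runtime
std::mutex input_lock;

// Workers may crunch in parallel, but every update of shared state
// serializes on world_lock, so this portion never parallelizes.
void worker() {
    std::lock_guard<std::mutex> g(world_lock);
    world.player_x += 1;
}

int main() {
    for (int frame = 0; frame < 60; ++frame) {  // the global loop
        {   // 1) drain input serially
            std::lock_guard<std::mutex> g(input_lock);
            while (!input_events.empty()) input_events.pop();
        }
        // 2) fan out update work, then wait for all of it before rendering
        std::thread a(worker), b(worker);
        a.join();
        b.join();
        // 3) rendering would go here, again on the loop's own thread
    }
}

The join-before-render step is the serial fraction that Amdahl's law charges against every frame.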

Other urls found in this thread:

valid.x86.fr/bench/rjmzdu/1

are you upset that nobody has replied to your bait thread?

not sure if you're stupid or very stupid or just incredibly smart.

P.S. I bet most games and most interactive applications can't go above 10% or 20% parallelization, max. On that graph, 10% is almost nothing; 20% is also almost nothing.
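
For anyone who wants to check those numbers instead of eyeballing the graph: Amdahl's law gives the speedup as 1 / ((1 - p) + p/n) for parallel fraction p on n cores. A quick sketch:

#include <cstdio>

// Amdahl's law: overall speedup with parallel fraction p on n cores.
double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    for (double p : {0.10, 0.20, 0.95}) {
        std::printf("p=%.2f: 4 cores -> %.2fx, 16 cores -> %.2fx, infinite -> %.2fx\n",
                    p, amdahl(p, 4), amdahl(p, 16), 1.0 / (1.0 - p));
    }
}

At p = 0.20, even infinitely many cores cap out at 1.25x, which is why the low curves on the usual Amdahl graph look so flat.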

Well, it's not a choice; single-core performance can only go so high, since it's limited by the laws of physics. Most consumer-based items have perfect hardware; they only need to improve the software.

However, very large, computationally expensive tasks cannot be done any other way. For example, I run calculations on a 500-CPU supercomputer; it has an estimated PassMark score of 5,000,000, and there is absolutely no chance it could be replaced by a single core.

Plus, most games that run on multiple cores feel a lot faster.

Let's just wait and see how well these "Laws" work in the real world

Remember Moore's law?

>Well, it's not a choice; single-core performance can only go so high, since it's limited by the laws of physics. Most consumer-based items have perfect hardware; they only need to improve the software.
It's hard to shrink the transistor nowadays. The photolithography difficulties alone are immense. We'll soon be hitting severe quantum limitations as well.
>However, very large, computationally expensive tasks cannot be done any other way. For example, I run calculations on a 500-CPU supercomputer; it has an estimated PassMark score of 5,000,000, and there is absolutely no chance it could be replaced by a single core.
>
>Plus, most games that run on multiple cores feel a lot faster.
It was mainly a response to those who claim they can get away without single-core performance on their desktops. They have the delusion that they can create interactivity without any single-core performance. That is simply impossible, because a global loop with inefficient locks will always govern an interactive multithreaded application, unless they want to imply that the 1% of time they spend on their desktops encoding something once in a while is all they actually bought a desktop for.

Moore's Law is not science like Amdahl's Law; Moore's Law was used as an advertising slogan by Intel, and it has been dead since the first dual core, since more cores do not mean double performance on CPUs for the common desktop. I know, "that's not the definition of it", but that's what Intel eventually promoted it as for advertising purposes. Only GPUs deserve such a law, because they are by definition parallel machines; CPUs are by definition serial machines, and they will always be largely serial machines unless we entirely change the computing paradigm beyond regular silicon and its siblings.

But everyone understands that 2 cores don't perform like twice 1 core; they're just faster than 1 core by about 95%, and that is not a huge loss.

This isn't Bulldozer. Ryzen is competitive in single thread performance, within 10-20%. Intel is not competitive with Ryzen's multithreaded performance, being up to 50% slower for the price.

Is this a difficult concept for you? 10 is a smaller number than 50.

Only if the application is fully parallelized, and in real life that means almost entirely non-interactive. Look at the graph: if it's only partially parallelized, the gains are very small. In practice, for the common desktop, you only get near 95% if you are not touching the computer at all at the time, because if you actually feed it input while it's processing (e.g. gaming, or actually using your Photoshop or any other program that needs input), then you must adhere to the rules of a global loop that governs the program, and that loop must be filled with locks that protect all kinds of shared variables in order to avoid segfaulting the application.

Nobody is talking about a meme war against companies here; both Intel and AMD have this problem. Desktop applications that require constant human input at the time of processing can never go near 100% parallelization, because that's the nature of their operation: they do not work on offline data as in rendering, they actually create the data at the time of operation, and interactive applications must keep coherency through a global loop filled with protections around shared variables.

The $500 Ryzen that everyone shills today is 25% slower per core than a $250 Intel.

My 4.6GHz i3 6100 is faster per core than an i7 7700K at stock. Does that mean it's fair to say my $120 CPU is faster than Intel's $300 CPU, or will you admit that's a stupid comparison?

Cache plays a role, and your CPU has very little of it. Also, with only 2 cores, your gains from parallelization are very small. But at 6 cores you enter overkill territory, while with 4 true cores the gains are very high.
Intel knows what they are doing technologically.
It's a stupid meme that they don't know the science.

Test it for yourself: take a solid multithreaded program, limit it to one CPU and then to 2 CPUs, and see how much actual threading such a program does (see the sketch below).
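
A rough way to run that experiment without touching CPU affinity (the workload here is made up and embarrassingly parallel, so it shows the best case; a lock-heavy interactive program would scale far worse):

#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Dummy compute kernel: no shared state, no locks, pure number crunching.
void burn(long iters) {
    volatile double x = 0.0;
    for (long i = 0; i < iters; ++i) x += i * 0.5;
}

// Split the same total work across k threads and report the wall time.
double run_with(int k) {
    const long total = 400000000L;
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (int i = 0; i < k; ++i) pool.emplace_back(burn, total / k);
    for (auto& t : pool) t.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    std::printf("1 thread:  %.2f s\n", run_with(1));
    std::printf("2 threads: %.2f s\n", run_with(2));  // near 2x only because nothing is shared
}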

>But at 6 cores
That is not what Amdahl's law says. Why do you people like to misrepresent laws of diminishing returns?

>Get called out on relative single threaded performance being a misleading benchmark because higher-core-count CPUs are clocked lower
>Spin off into unrelated bullshit
The point is your "$250 Intel beats $500 Ryzen!!1" argument is dumb at best and maliciously misleading at worst. Presumably, since heat appears to be the main limiting factor in bringing Ryzen to 5GHz and above, smaller Ryzen chips like the 4 core variants will clock higher and have a smaller performance gap in single threaded work.

You have the delusion that a common desktop can go above 20% parallelization during interactivity. That's a stupid meme. Only if you don't touch the input at all at the time can you even pretend to approach 50% or more.

I agree; that's why I buy dual cores and overclock them. I literally get the same performance as $350 CPUs in games.

The proof is in the pudding: you are showing no proof of your claims, and you still misrepresent laws of diminishing returns. Actually, I just realized Amdahl's Law isn't one of those; it's a linear law.

That does have a point, but I believe it sometimes gets screwed up by a completely different factor: the cache is just too small on those CPUs. E.g., if you had the 8MB of cache of an i7 on a dual core, a lot of games would be immensely faster.

MOAR COARS COMPLETELY BTFO

Some laws can be broken. Remember Moore's Law. T_T

Open task manager. Go to processes. I would bet the fucking planet you have more than 6 processes running.

If you're using a single program that does one thing and relies entirely on user input, your application of the math would be correct. But nothing works like that. Physics engines benefit from more parallel processing, as does any sort of encoding or decoding, any sort of cryptography, and so on. And all those functions WILL OCCUR WITHIN PROGRAMS.

The burden of proof is on you, since we know AMD themselves can only shill their $500 CPU on purely offline rendering benchmarks. When they use something like gaming, they are careful to doctor it by using a 3.2GHz $1,000 Intel that nobody uses. Try using a 4-true-core or 6-true-core Intel that costs $500 or much less than the AMD, and see.

This is not like Moore's Law; this is just a way to calculate the speed at which threads will finish, based on the effectiveness of threading.

see

So you've got the ryzen that, at the same price, is slightly slower than a mainstream i7 in single-threaded tasks but, at half the price, is up to par with the broadwell-e line in multithreaded tasks. You're comparing the wrong processors here.

>intel damage control

I can do it, but the burden of proof is on you, because you made the claim "20% max" and didn't show anything to support it.

>Open task manager.
Task Manager proves nothing. I can run 256 threads at 100% usage each. It does not mean I did it efficiently.

I have about 60 alone with just chrome open.

>calculate the speed at which threads will finish
Finish what though?

-E is a luxury product line that nobody here buys. You pay 70% more to get at most 10% more. The majority of people do not go above $250-350, and that's where the real war for the desktop is fought.
AMD knows what they are doing in shilling.
But Intel also knows the science.

No, user, you are the retard. Ryzen 7 exists to compete with the higher end six and eight core Haswell-E processors, and to make that niche (streaming, encoding, rendering, compiling) more accessible. It was never designed to be a high clockspeed four core chip, OR IT WOULD BE ONE. THEY CALL THAT RYZEN 5. You should be comparing the 4-core i7s to the 4 and 6 core Ryzens when AMD decides to get off their ass and release them.

Moore's law was just a loosey-goosey industry rule of thumb.

You don't understand economics in the fucking slightest. Yes, Haswell E is a luxury most people can't afford.

Guess what?

Now they can. It's called Ryzen 7. Eat shit.

You actually think most of those processes are actively processing anything, and not just waiting to be called so they can shit out output for a microsecond before returning to slumber?


.toplel

>muh price class bullshit
Just because they're at roughly the same price doesn't mean they're meant to compete with each other. Just because a tractor trailer can cost as much as a luxury car doesn't mean they're direct competitors.

>You will always need strong threads not only in gaming, including setting up your Photoshop before you actually render, which is 99% of the time you actually use a computer

This is very true, but many seem to forget that you never only run a single thing on a computer.
Even if you're only actively using Photoshop at the time, you probably have a browser open, maybe an audio player and an image viewer. Another such activity would be compiling something while doing something else, or running multiple virtual machines, for example.
While multithreading a single program may not be as effective as we want it to be, it allows you to do many more things in parallel, even ones you're not actively using, without taking performance from your main activity.

INTEL SHILLS BTFO
HOW CAN -E EVER RECOVER?

Let's be honest here. Do you think the nerfed AMD is going to compete when their $500 chip already loses against $400 Intels for most realistic tasks? Most people don't use a computer only to encode HEVC, or only to watch the AMD CEO stare at a Cinebench run without touching the computer; they use it for stuff that can't parallelize more than 20% max for 99% of the time they are on a computer.

really winning people over with your eat shit stuff

you clearly are over 18

>I have no argument

Let's be honest here, you're intentionally comparing the wrong processors.

>I have officially run out of arguments after accidentally conceding AMD has sidestepped Intel by disregarding the fake "consumer high end" entirely and just releasing a $500 workstation chip
>I got called out on it, better pretend this isn't Sup Forums and act offended so I win :(

Have fun explaining how 4 cores are 3.5% effective with 30% effective multithreading.

For not caring about price/performance ratio, Intel shills sure seem to be obsessed with pricing.

For the nth time, that's where R5 and R3 come into play. Gentoo fucking Christ, it's been less than a week and your memes are already stale.

Shills my man, they think they're fooling somebody.

I'm not touching a $500 CPU with a stick. Sorry, but I don't care how much you love to shill AMD; a $500 1800X is just not a desktop product, it's a niche luxury product. The $200-300 range is where the real war is fought.

>Intelfags are poor on demand

Lel

I take it reading isn't your strong suit, so I'll use little words for you.

Big Intel chip loses to big AMD chip. But big chips are slow because they are too big! So little Intel chip beats both big AMD and big Intel chip in tasks where little processors do good!

Now, little Timmy, how do you think the little AMD chip will perform?

>tfw lock free algorithms and data structures exist
feels good man
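
They do exist, and they're the standard answer to "locks serialize everything". A minimal sketch of the idea, a Treiber-style lock-free stack push using a CAS loop (pop is omitted because safe memory reclamation is a much longer story):

#include <atomic>

struct Node {
    int   value;
    Node* next;
};

std::atomic<Node*> head{nullptr};

// Push with compare-and-swap instead of a mutex: no thread ever blocks,
// though contending threads still retry on the same cache line, so the
// serialization Amdahl's law counts doesn't vanish, it just gets cheaper.
void push(int v) {
    Node* n = new Node{v, head.load(std::memory_order_relaxed)};
    while (!head.compare_exchange_weak(n->next, n,
                                       std::memory_order_release,
                                       std::memory_order_relaxed)) {
        // compare_exchange_weak reloaded the current head into n->next; retry.
    }
}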

I play SuperPi all day too.

Most of the time, though, most of that stuff just sleeps in the background. Nowadays it's not merely low usage, it's 0% usage, because OSes are smart and handle this on their own. Even a simple browser like Chrome does it by itself: in most cases only the current tab may actually use CPU resources.

Are. You. Fucking. Dense.

If you aren't buying a $500 chip anyways, WHY ARE YOU ARGUING ABOUT THE PERFORMANCE OF THE $500 CHIP INSTEAD OF WAITING TO SEE HOW THE $200 CHIP PERFORMS?

HOW FUCKING DENSE IS IT POSSIBLE TO BE

>niche luxury product
And somehow you can't say the same for the R7 1700, despite it offering the same trade-off: slightly reduced single-threaded performance in exchange for strong multithreaded performance.

So that means we're no longer talking about i7s right?

Then get ready for a 1400x ($200) to blow the fuck out of everything before the end of Q2.

That's the stupidest meme I've heard in this thread to date. Literally 90% of Sup Forums is on Intel CPUs, and literally 50% or more of those are sound, economical choices that do not exceed the $250 range.
I'm almost certain those who go above a 4-core i7 are 1% or less.
The notion that a $1,000 Intel CPU that runs at 3.2GHz is common is inane.

>b-but muh cockspeed and 4 coarz

go back to your mommy to cry.

also fags who need moar coars just buy second hand xeons

can you go cry to your mommy and not here?

>getting beaten by an older architecture with a 1GHz clock advantage

I can't wait to buy those 2-core Pentiums and OC them for gaming!

That proves OP's point. The 7700K, a common Sup Forums chip people actually buy, is only a minor downgrade from the $1,000 chip. The notion that a $1,000 chip that runs at 3.2GHz is a common Sup Forums choice is inane.

>The notion that a $1,000 chip that runs at 3.2GHz is a common Sup Forums choice is inane.
Obviously.
Though now we have Zen, which is basically a $1000 chip for ~$400.

Good thing we have much cheaper AMD chips with equal performance or better, there, huh? :)

But yeah, who needs OCing anyway? It's not like everyone was running 7700k at 5.3GHz easily yesterday.

>That disproves OP's point. The 7700K, a common Sup Forums chip people actually buy, is a downgrade from the $1,000 chip, which also gets beaten by a $330 chip. This is why we see butthurt from Intel fags every day. The notion that a $1,000 chip that runs at 3.2GHz is a common Sup Forums choice doesn't fucking matter now that AMD is doing it better and for less than the price of a 7700K.

I like how the older 3.2GHz chip beats a newer 4.2GHz chip in games, which were __APPARENTLY__ only running up to 4 cores.

Quite funny, isn't it?

They are not butthurt; they laugh at you. Also, it would help AMD more if you actually exposed their hypocrisy.
This meme war pretending that a $1,000 Intel chip at 3.2GHz is anything realistic for Sup Forums only hurts your cause.
This will actually help Intel, because AMD will flop again. Help AMD by telling them where they are full of shit.

They also have more cache. It's not just clocks.

They have 20MB cache against 8MB.

Of course it is; no amount of clockspeed (outside of LN2) can replace cores when cores are needed.
Games have been using more than 4 cores for a while now, in good part thanks to consoles, ironically.

Still, shit ports like Batman and the like can't be saved even if they're running on 20 cores.

>This meme war pretending that a $1,000 Intel chip at 3.2GHz is anything realistic for Sup Forums only hurts your cause.
Are you autistic?
Truly, you must be autistic to keep using this line.
Of course a $1,000 chip isn't common for ANYONE. But that's likely to change real soon when you increase its performance and drop its price below that of a 7700K.
and besides that

WHO THE FUCK CARES ABOUT Sup Forums?

So is this a chicken-and-egg problem? Are people not running Broadwell-E because it's cost-prohibitive, or is Broadwell-E cost-prohibitive because people aren't going to run it anyway, so it has to be priced at $1000 to make adequate returns?

Yeah, it's part of the chip, obviously; high-core-count chips without appropriate L3 are dumb.
L3 eats up that slight IPC advantage Skylake has over Broadwell.
Too bad there isn't an 8-core Skylake with 20MB+ cache out, huh?

cry to your mommy, not here.

Intelfags, what happened to performance at all cost? Weren't you all running Titan XPs? Are you poor?
Can afford a $1000+ GPU but not a $400 CPU?

>no argument: the post

Larger caches are slower; for any SERIAL workload the larger cache won't help all that much.

no

Yes, an 8MB L3 has around 8 clock cycles of access time on a newer Intel; a 20MB L3 has around 16.
Latency is everything to IPC.

Also, I forgot to mention that big caches aren't built for performance, they're built for density, so they're usually slower and narrower instead of wide like smaller caches.
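
Whatever the exact cycle counts, the bigger-is-slower effect is easy to measure yourself with a dependent pointer chase (a rough sketch; the set sizes and hop count are arbitrary, and numbers vary by CPU):

#include <chrono>
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

// Time a chase through one cycle covering the whole working set: every
// load depends on the previous one, so the average time per hop
// approximates load latency at whichever cache level the set fits in.
double ns_per_hop(std::size_t elems) {
    std::vector<std::size_t> next(elems);
    for (std::size_t i = 0; i < elems; ++i) next[i] = i;
    std::mt19937 rng{42};
    // Sattolo's algorithm: shuffle into a single full-length cycle.
    for (std::size_t i = elems - 1; i > 0; --i) {
        std::uniform_int_distribution<std::size_t> d(0, i - 1);
        std::swap(next[i], next[d(rng)]);
    }
    const long hops = 50000000L;
    std::size_t at = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (long h = 0; h < hops; ++h) at = next[at];
    auto ns = std::chrono::duration<double, std::nano>(
                  std::chrono::steady_clock::now() - t0).count();
    std::printf("(sink %zu) ", at);  // keep the chase from being optimized away
    return ns / hops;
}

int main() {
    std::printf("32 KiB set: %.1f ns/hop\n", ns_per_hop(32 * 1024 / sizeof(std::size_t)));
    std::printf("8 MiB set:  %.1f ns/hop\n", ns_per_hop(8UL * 1024 * 1024 / sizeof(std::size_t)));
}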

OP should have said that when they released the 10-core 6950X.

What if it's two completely different processes instead of one parallelized process?

Literally nobody cares about $1000 CPUs

Yeah, the new big deal is $330 chips that perform better than $1000 chips

I like the $330 overclocked chips that perform better than a $1800 chip scenario more actually

>$330 overclocked chips that perform better than a $1800 chip
retard

I wouldn't go that far, but I'd fucking love for that to be the case just for the tears.

Literally one day's time will tell.

If the 1700 can overclock to 4.0GHz, then it's gonna be faster than a 6950X UNLESS the software scales almost linearly with cores (very rare).

Even if it were 5% slower, it's still 6 times cheaper.

Most of the time, and in most real desktop cases, the second process does nothing at all. E.g. Photoshop alongside Office: Office will sit at 0% usage while Photoshop is up. Or Chrome with 100 tabs: most of the time, 99 tabs will be at nearly 0% usage.

valid.x86.fr/bench/rjmzdu/1
I'm so sorry Intel

Sorry

>Cache plays a role and your CPU has very little.
So you're saying Ryzen will beat Intel?

It's 1 LN2 sample against the average. Hold your horses. We'll know for sure when reliable reviewers enter the benchmarking race.

ITT: another ayymd and jewtler thread, even though OP just posted CS bullshit.

/neo g/

>Desktop applications that require constant human input at the time of processing can never go near 100% parallelization
Neither do they need any significant processing power because they spend 99.999% of the available processor time waiting for human input.
In other words, it doesn't matter if some parts of the program cannot be parallelized, what matters is parallelization of the actually computationally heavy tasks within the program. For the majority of home users those are game engines, video processing and encryption that parallelize perfectly well, and everything else is easily handled by a single core from 10 years ago.
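
In code, that split looks something like the sketch below (minimal and hypothetical; encode_chunk is a made-up stand-in for the heavy, parallelizable part): the serial loop stays cheap and responsive while the heavy work runs elsewhere.

#include <chrono>
#include <cstdio>
#include <future>

// Hypothetical heavy kernel standing in for encoding/physics/crypto.
long encode_chunk() {
    long acc = 0;
    for (long i = 0; i < 300000000L; ++i) acc += i & 7;
    return acc;
}

int main() {
    // Farm the heavy, parallelizable work out to another thread...
    auto job = std::async(std::launch::async, encode_chunk);

    // ...while the serial "interactive" loop ticks along, spending nearly
    // all of its time waiting, exactly as an input loop does.
    while (job.wait_for(std::chrono::milliseconds(16)) != std::future_status::ready) {
        std::printf("frame drawn, input polled, still encoding...\n");
    }
    std::printf("done: %ld\n", job.get());
}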

What if I press two or more buttons at the same time? Check and fucking mate, Intel OP.

i7-980 here; sure feels good to still be at the top of the pack with my hexacore. ;)

Bought it used for $60 on Kijiji as an upgrade from my i5-750; honestly, I'm really hoping Ryzen gives me a reason to upgrade. Because of Intel being jews, the market has stagnated for so long that I'm surprised anyone has even been buying new these last few years.