Why don't CPUs have micro "corelets" for all those 2% cpu usage background processes?

>4 main cores with hyperthreading for applications
>16 tiny, in-order cores with a shared cache between them for networking, daemons, etc

Apt image for this question, op

>what is CPU priority?
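
To make this concrete: on Linux you can already deprioritize a background process yourself with setpriority(). A minimal sketch, assuming a made-up PID of 1234:

```c
/* Minimal sketch: lower a background process's priority on Linux.
   PID 1234 is hypothetical; substitute your own daemon's PID. */
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    int pid = 1234;  /* hypothetical background process */
    /* Nice values run from -20 (highest priority) to 19 (lowest). */
    if (setpriority(PRIO_PROCESS, pid, 19) != 0) {
        perror("setpriority");
        return 1;
    }
    printf("pid %d now runs at nice 19\n", pid);
    return 0;
}
```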

because they can just share a core

yeah I was thinking about APUs and thought "what if the baby GPU cores were x86 instead of GCN"

What makes you think that would provide any benefit?

What if...

But wouldn't it be more efficient to give a game 100% of the 4 cores it uses, and push Windows and background processes into a baby chip commune?

It's like having an eight-core processor, except smarter: you know the last two cores are just going to handle tiny shit anyway, so why not make them tiny?

>Same performance in quad core applications as an eight core
>Cheaper because of better yields

>what is big.LITTLE

It wouldn't help performance (no 'real' program can ever make 100% use of the CPU anyway), but it could help with power consumption.

you can literally just split the instructions up
also this means that if one process were to sleep, you wouldn't be wasting a perfectly good core

that's how consoles work. the PS4, for example, has a secondary baby processor for background tasks.

That's what the Cell architecture did

>What is ARM big.LITTLE?

for you

>no 'real' program caps the cpu
>>>ffmpeg
tell that to my conversion server

ARM big.LITTLE is exactly that.
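
For the curious: on ARM big.LITTLE systems, capacity-aware kernels expose each core's relative capacity (0-1024) in sysfs, so you can see the big/little split from userspace. A minimal sketch, assuming the kernel actually provides the cpu_capacity file (not all do):

```c
/* Sketch: read per-core relative capacity on an ARM big.LITTLE
   system. The cpu_capacity sysfs file only exists on kernels with
   capacity-aware scheduling support; hedge accordingly. */
#include <stdio.h>

int main(void) {
    char path[64];
    for (int cpu = 0; cpu < 8; cpu++) {  /* assume up to 8 cores */
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/cpu_capacity", cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;  /* no such core, or no capacity file exposed */
        int capacity;
        if (fscanf(f, "%d", &capacity) == 1)
            printf("cpu%d capacity: %d\n", cpu, capacity);
        fclose(f);
    }
    return 0;
}
```

Big cores report at or near 1024 and little cores report some fraction of that, which is how the scheduler knows where to put light background work.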

>he doesn't know how to set a program's priority or pin it to a single core
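
In case anyone genuinely doesn't know: pinning a process to one core is a single call on Linux (taskset -c 0 does the same from the shell). A minimal sketch that pins the calling process to core 0:

```c
/* Sketch: pin the current process to core 0 on Linux. This is what
   "assigning a program to a single core" amounts to. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);  /* allow only core 0 */
    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to core 0\n");
    return 0;
}
```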

a good kernel should be able to do this as well

You'd save virtually nothing since your processor is idling almost all the time anyways.

The only case where this would be relevant is saving the context-switch overhead in games and such, but even then the gains are tiny, and die area and complexity budgets are too scarce to waste on such terrible ideas.

The logic for figuring out what to offload is likely to be about as complex as a task scheduler itself.
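
To illustrate the point: even a toy "big core or little core?" placement rule is already scheduler-shaped, and the hard parts (load balancing, cache affinity, migration cost) aren't even in it yet. Everything in this sketch, from the task struct to the 5% threshold, is invented for illustration:

```c
/* Toy sketch of offload logic: put a task on a little core if its
   recent CPU usage is low, otherwise on a big core. All names and
   the 5% threshold are made up for illustration. */
#include <stdio.h>

struct task {
    const char *name;
    double recent_cpu_pct;  /* utilization over some sampling window */
};

enum core_class { BIG_CORE, LITTLE_CORE };

static enum core_class place(const struct task *t) {
    /* A real scheduler also weighs per-core load, cache warmth,
       wakeup latency, and migration cost; this is the easy 10%. */
    return t->recent_cpu_pct < 5.0 ? LITTLE_CORE : BIG_CORE;
}

int main(void) {
    struct task tasks[] = {
        { "game.exe", 87.0 },
        { "updater",   1.5 },
        { "indexer",   2.0 },
    };
    for (int i = 0; i < 3; i++)
        printf("%s -> %s core\n", tasks[i].name,
               place(&tasks[i]) == BIG_CORE ? "big" : "little");
    return 0;
}
```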

Windows is kinda bad at scheduling, Linux definitely has it beat here.

Funnily enough, the Linux scheduler is better because it's simpler and doesn't try to do overcomplicated shit.

blog.acolyer.org/2016/04/26/the-linux-scheduler-a-decade-of-wasted-cores/

Delete this link right now.


>blog.acolyer.org/2016/04/26/the-linux-scheduler-a-decade-of-wasted-cores/

this is fucking AAA+ grade comedy gold

at least the graph is actually going up, Intel shill

DELET

Saved.

lol fuck off with your shitty Apple ideas

>Dedicating cores
is inefficient design

simple: these are reject shit processes that may or may not need to run.

windows update, even when turned off, will sometimes turn its ass back on and thrash a strong core to hell and back; meanwhile, if it lived on a small cluster of weak-as-fuck cores, it would only thrash one of those weak cores and never affect my performance.

When everything works correctly, sure, the 16 cores are useless, but those background processes eat around 15-20% of my CPU whenever one of them is shitting the bed.
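
For what it's worth, you can already do this by hand on Windows: confine a misbehaving process to one core and drop its priority via the Win32 API. A sketch, assuming a hypothetical PID of 4321 and sufficient privileges:

```c
/* Sketch: confine a process to core 0 and lower its priority on
   Windows. PID 4321 is hypothetical; requires enough privileges
   to open the target process. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    DWORD pid = 4321;  /* hypothetical misbehaving background process */
    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (!h) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }
    SetProcessAffinityMask(h, 1);  /* bit 0 set = core 0 only */
    SetPriorityClass(h, BELOW_NORMAL_PRIORITY_CLASS);
    CloseHandle(h);
    return 0;
}
```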

ARM big.LITTLE does this already.

The only real benefit is maybe power saving. If my PC is plugged into the wall, there's no point in saving however many percent of energy; I'm not relying on a battery.

It's a waste of die space I think.

this, it's only relevant for power saving
otherwise you're just forcing background stuff to take longer than it otherwise would

You're describing Bulldozer.
It did as well as you'd think.

No he's not. Not even close.

He's describing ARM's big.LITTLE.

who said they don't?
the SPARC64 XIfx has 34 cores, 2 of which are dedicated to the OS and background processes

I'd imagine having an asymmetrical design would just make it difficult to schedule tasks properly.

ARM CPUs already have that, but they're RISC CPUs, so it's easier to manage. x86 is way more complicated, so it would end up really expensive and really hard to make work properly.


The problem is not the cores, the problem is the software. Windows is well known to be bloated, but if you look at Task Manager, most of its processes are idle and only consuming memory; they spin up threads when they're needed and that's it. But they're still there, and they'll get in the way of other processes when they do wake up.

>14% cpu usage at idle
>16% ram usage at idle

like I give a fuck

Wow
Lelnux truly is a bad meme

CPUs are already underutilized as they are, so why put all the effort into designing such a convoluted architecture when you can simply offer a model with extra real cores instead?

>14% cpu usage at idle
That's not idle.
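
If you want to check what your machine actually does at "idle", sample /proc/stat twice and compute the busy fraction yourself. A minimal Linux sketch:

```c
/* Minimal sketch: measure overall CPU busy % on Linux by sampling
   the aggregate "cpu" line of /proc/stat one second apart. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void sample(long long *busy, long long *total) {
    /* fields: user nice system idle iowait irq softirq steal */
    long long v[8] = {0};
    FILE *f = fopen("/proc/stat", "r");
    if (!f) { perror("fopen"); exit(1); }
    fscanf(f, "cpu %lld %lld %lld %lld %lld %lld %lld %lld",
           &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7]);
    fclose(f);
    *total = 0;
    for (int i = 0; i < 8; i++)
        *total += v[i];
    *busy = *total - v[3] - v[4];  /* subtract idle and iowait */
}

int main(void) {
    long long b0, t0, b1, t1;
    sample(&b0, &t0);
    sleep(1);
    sample(&b1, &t1);
    printf("cpu busy: %.1f%%\n",
           100.0 * (double)(b1 - b0) / (double)(t1 - t0));
    return 0;
}
```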

I think we'll probably hit the limit on cores/threads somewhere around 16-20 threads and then we'll have to find another clever trick to improve general performance.