How come CPUs have been hovering around 3-4GHz for so long now?

Is this a limitation of current processor technology? Does going over a certain frequency just trigger unreasonable heat and power consumption?

It's been almost 15 years and we're still hovering around the same frequency.


>Is this a limitation of current processor technology?
possibly yes and even if not, 99% of people don't need more
it's strange that people buy new cpus, buy only one and stick with it for 10 years

Because we've found a sort of sweet spot with frequencies. If we raise them much further we consume a lot more power and generate a lot more heat; it's just not worth it. Better to find other ways to increase efficiency and speed.

That thing is unnervingly black

Why are they so ugly?

There is a soft cap where it is impractical to push CPUs any faster. That cap usually hovers anywhere from 3.5-4.5GHz. Any higher, and you start generating a lot more heat and using a lot more power. This is also why multi core processors exist today.

You mean the young lady?

Intel Kaby Lake CPUs actually reach 5GHz pretty reliably for 24/7 use under air cooling. About 80% of them reach 4.9GHz, and you're pretty much guaranteed to reach the magic number of 5.0 if you delid and use a huge air cooler or some AIO.
The thing is you can't just keep doing "moar gigahurtz". Some operations can't really be reliably done faster anymore under regular conditions. Adding even more clock cycles per second is not very feasible anymore, what's important now is how much the CPU is capable of doing within these clock cycles.

Since we're approaching the limits of silicon and the end of Moore's law, will CPUs of the future improve by using a different material to reach higher clock speeds instead of just shrinking the process every 18 months?

You still need a pretty big cooler though.

It's not fx9590 hot, but it's still hot.

>99% of people don't need more
wew
if the limit was 3 MHz, you'd be saying the same thing

You just have inferior taste

but why tho

No, lol. When CPUs were slow, it was obvious that they were slow; no one thought "I don't need a more powerful CPU". Now most people think like that.

The faster you make the cores, the hotter they get.

They could make mass market chips above 4GHz, but they would need massive cooling solutions like water cooling. Apple tried a line of water cooled CPUs for the mass market some years back. Why they needed them, I have no idea; they weren't that fast. More Apple bullshit. Problem is that water cooled solutions break down eventually. Moving parts. It's fucked. Maybe Apple wanted that.

If you get a good chip, you can use watercooling to overclock it to 8+Ghz. Wait, did I say water? I mean liquid nitrogen.

hwbot.org/submission/2615355_the_stilt_cpu_frequency_fx_8370_8722.78_mhz

Anything better than air cooling is a pain in the ass. Air cooling will only get you to about 4GHz. Most mainstream (that is, shitty) air cooling will only get you to around 2GHz, which is why there are all these chips at 2GHz.

So why is 4ghz the limit? Air cooling can only go so far.

>it's strange that people buy new cpus, buy only one and stick with it for 10 years
agreed

But why is that though? Is it just a limitation of modern technology? How come we have 95w CPUs anywhere between 3GHz and 4.5GHz? What determines these?

I upgraded from a q8300 to an i7 3770 in January. Did I do well?

already been answered but it's diminishing returns. no real need for it. more energy used and therefore more heat produced. it's infeasible to cool most higher-clocked CPUs with only air cooling.

will be interesting to see how quantum computing comes along, but that currently needs even more drastic cooling methods than the best CPUs do now.

pic related, how they currently cool a quantum CPU (to about 15 millikelvin, or -460 F)

It's because that's the frequency of the universe. Going faster than that creates mega energy that'll be too difficult to cool down.

>How come the speed of light hasn't changed in the last 10 years? Duuurrrrrrrr.

spectrum.ieee.org/computing/hardware/why-cpu-frequency-stalled


>So why not push the clock faster? Because it's no longer worth the cost in terms of power consumed and heat dissipated. Intel calls the speed/power tradeoff a "fundamental theorem of multicore processors"—and that's the reason it makes sense to use two or more processing areas, or cores, on a single chip.

>Intel reports that underclocking a single core by 20 percent saves half the power while sacrificing just 13 percent of the performance. That means that if you divide the work between two cores running at an 80 percent clock rate, you get 73 percent better performance for the same power. And the heat is dissipated at two points rather than one. So even though the cutting-edge logic chip gulps ever more power [see graph, center], it isn't about to melt its way through the floor.

>The rising power consumption of CPUs [graph, center] made it less attractive to focus on cycles per second, so clock rates stalled. A better gauge of performance, the number of instructions performed per second, continued to rise without betraying any hint of the stall.
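
A quick sanity check on the arithmetic in that quote, using only the 13% and 50% figures it gives (nothing below is measured, it just multiplies those numbers out):

#include <stdio.h>

int main(void) {
    /* Figures from the quote above: underclocking one core by 20% costs
       ~13% performance but saves ~50% power. */
    double perf_per_core  = 1.0 - 0.13;   /* 0.87x single-core performance */
    double power_per_core = 0.5;          /* half the power of a full-speed core */

    double total_power = 2 * power_per_core;  /* two slow cores = same power as one fast core */
    double total_perf  = 2 * perf_per_core;   /* = 1.74x */

    printf("power: %.2fx, performance: %.2fx (~%.0f%% better)\n",
           total_power, total_perf, (total_perf - 1.0) * 100.0);
    /* Prints roughly 74% better, in line with the quoted 73%. */
    return 0;
}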

Stop posting farm animals.

That's not a very good analogy

Even if we managed to reach 30-40GHz, you'd probably be still complaining.

yes

>high cheek bones
>nice nose eyes
better looking than most white womyn, but she's just way too black.

Pretty anime features aside from the skin tone.

When you send electricity through a wire, there is resistance. That resistance causes heat. Unless the CPU has zero resistance, there will be heat.

The heat builds up if it's not dissipated. If you don't dissipate the heat fast enough, the wire gets hot enough that it melts.

Modern CPUs have a failsafe. They will shut off before they melt. They will usually slow down long before this. In the old days, you could easily melt your CPU if you fucked up.

More clock speed = more switching = more current through that resistance = more heat

So you either get rid of the heat or slow the chip down or both.

What I'm describing is exactly what happens in an incandescent light bulb. The wire heats up so hot that it's glowing. It's just before melting temp, expending huge amounts of heat and light. This is what's happening in the CPU on a tiny scale. We're talking wires so tiny that even small amounts of heat need to be dissipated if you don't want it to melt.

The little electrons hit all sorts of other atoms inside the wire as they move through it. Those collisions cause energy to be transferred from the electron to the materials in the wire. That energy can build up and melt the wire if you don't dissipate it.

Have you ever felt an electrical cord heat up as you send power through it? This is what I'm talking about here.

The 4ghz limit is just the place where most heatsinks fail the CPU. It's a physical law. Actually, for most shitty rigs, 2ghz is the place where heatsinks start to fail. This is why multicore is a good solution. You don't have one core constantly running, heating up and heating up. You have lots of slow cores revving up and down, resting, cooling.
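
For anyone who wants numbers for the wire analogy above: Joule heating follows P = I²R, so heat scales with the square of the current. The resistance and current below are made up purely to show the shape of the relationship (and, as a reply further down notes, most of a CPU's heat actually comes from switching rather than a steady current):

#include <stdio.h>

int main(void) {
    /* Toy Joule-heating numbers, purely illustrative. */
    double resistance = 0.5;   /* ohms (made up) */
    double current    = 2.0;   /* amps (made up) */

    double power = current * current * resistance;   /* P = I^2 * R */
    printf("dissipated as heat: %.2f W\n", power);

    /* Push twice the current through the same conductor and the heat quadruples. */
    double doubled = 2.0 * current;
    printf("at 2x the current: %.2f W\n", doubled * doubled * resistance);
    return 0;
}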

Dennard scaling

muh DIK

god damn

shes fucking beautiful

Physical limits. Graphene might be a suitable replacement for silicon in electronics, and may have vastly superior characteristics.

>88


what did kek mean by this?

>Modern CPUs have a failsafe. They will shut off before they melt. They will usually slow down long before this. In the old days, you could easily melt your CPU if you fucked up.

One more thing I want to say about this. Modern CPUs automatically slow down when they are getting too hot. One consequence of this is a lot of users have some really shitty rigs with bad cooling and they have no idea. Their CPUs are throttling down all the time and they have no idea. You need to do some quite extensive testing to figure out if your cooling solution is good. My personal opinion, around 1ghz is about as fast as most users can really truly go. That is, with sustained full load. Most applications don't employ a sustained full load so it doesn't matter too much. A super fast CPU will chug at full speed for awhile before it needs to throttle down, so it is good to get a fast CPU in most cases.

Point is, 4ghz is way way way overkill for the majority of users. 2ghz is plenty. Better to have more cores.

>A better gauge of performance, the number of instructions performed per second, continued to rise without betraying any hint of the stall.
Hey if all that matters is instructions per second why not just replace your CPU with a GPU.

niggers aren't human

>But why is that though?
Basically you need to use more power to flip a transistor faster, and this excess energy has to go somewhere and that somewhere is heat.
Until we get room-temp superconductors we're pretty much stuck with what we've got.
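
Roughly, the energy to flip one gate is E = ½CV², and flipping it f times per second costs E·f in power. The capacitance and voltage below are order-of-magnitude guesses, just to show why "faster flips" means "more heat":

#include <stdio.h>

int main(void) {
    /* Rough energy to charge/discharge one gate's parasitic capacitance:
       E = 1/2 * C * V^2. Both values are order-of-magnitude guesses. */
    double c_gate = 1e-15;                 /* ~1 femtofarad per gate (assumed) */
    double v      = 1.2;                   /* core voltage in volts */
    double e_flip = 0.5 * c_gate * v * v;  /* joules per transition */

    double f = 4.0e9;                      /* 4 GHz */
    printf("energy per flip:            %.2e J\n", e_flip);
    printf("one gate flipping at 4 GHz: %.2e W\n", e_flip * f);
    /* Multiply by the hundreds of millions of gates that switch each cycle
       and the "excess energy has to go somewhere" point becomes obvious. */
    return 0;
}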

She's ok, she's like a 7; cute, but not stunning

>My personal opinion, around 1ghz is about as fast as most users can really truly go.
Y-you what?
CPU should reach max temp in about a minute under full load and the vast majority of builds don't reach throttling temp in a minute at >1ghz.

>Better to have more cores.
More cores is fucking pointless if you're waiting on a long line of necessarily serial computation.
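
That's Amdahl's law in a nutshell: if a fraction of the work is necessarily serial, extra cores stop helping pretty quickly. A sketch with an assumed 70% parallel fraction (the number is arbitrary):

#include <stdio.h>

/* Amdahl's law: with a fraction p of the work parallelizable,
   speedup on n cores = 1 / ((1 - p) + p / n). */
static double speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    double p = 0.70;   /* assume 70% of the workload parallelizes (arbitrary) */
    for (int n = 1; n <= 16; n *= 2)
        printf("%2d cores -> %.2fx\n", n, speedup(p, n));
    /* Tops out near 1/(1-p) = 3.3x no matter how many cores you add,
       which is why per-core (serial) speed still matters. */
    return 0;
}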

>But why is that though? Is it just a limitation of modern technology? How come we have 95w CPUs anywhere between 3GHz and 4.5GHz? What determines these?

GHz is gigahertz, or a billion cycles per second. The reference clock does come from a physical quartz crystal, but that crystal only vibrates at around 25-100MHz; a PLL on the chip multiplies it up to the multi-GHz core clock. What actually has to switch billions of times per second is the transistors and the signals between them, and there are hard physical limits on how fast that can happen.

Something like 200GHz is far beyond what today's silicon can switch at across a whole core. Much like approaching the speed of light, there's a limit to how fast this kind of circuit can be driven, and we are approaching it.
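
Concretely, the multi-GHz figure comes from multiplying a much slower reference clock. The 100 MHz base clock and x48 multiplier below are typical desktop defaults, not taken from any particular board:

#include <stdio.h>

int main(void) {
    /* Typical desktop scheme: a ~100 MHz base clock derived from a quartz
       crystal, multiplied up by an on-die PLL. Values are common defaults,
       not taken from any specific board. */
    double bclk_mhz   = 100.0;
    int    multiplier = 48;

    printf("core clock = %.0f MHz x %d = %.1f GHz\n",
           bclk_mhz, multiplier, bclk_mhz * multiplier / 1000.0);
    /* Which is also why "just upping the multiplier in the BIOS", as
       mentioned later in the thread, is enough to hit 4.8 GHz on an
       unlocked chip. */
    return 0;
}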

>no real need for it.

there is a need for it; mankind will consume every computer cycle it can get its hands on. If terahertz computers were possible and available, they'd be maxed out right now

Gotta decompress those illegally downloaded zip files as quickly as we can

Can't you afford having two CPUs?

This.

with unlimited resources there is no telling what we could do. If the physical limit had been 100MHz, we would not know about high-def digital video or much else

if we had unlimited computing power we could monitor your health down to the micron at all times and prevent many diseases; we would have self-driving cars and tons of other unthinkable automation, plus benefits in cancer research and other fields that currently use supercomputers on timeshare

there is no end to our use for computer cycles and it would not just be for banal shit

>disgusting gorilla lips
into the trash

I'm gonna have to disagree with
I've had a core 2 quad 3.0 ghz for 7 years
I'm really starting to see a bottleneck
But I guess it's also the 4GB of DDR2-800
that shit doesn't cut it anymore either
But that's gaming
this computer does great for just browsing the internet and doing word or other shit

>possibly yes and even if not, 99% of people don't need more
Are you a fucking intel shill trying to justify your stalled performance? You sound like that IBM guy "there will never be a market for more than a dozen computers in the US".
No, you faggot, serial compute is absolutely necessary and of prime importance for a CPU. We have GPU for massively parallel tasks, so we don't need 16 core CPUs.

Two reasons why clock speeds can't go up forever are the timing of the logic gates inside the chip and the heat generated. If the clock runs too fast, signals can no longer propagate through the logic gates within one cycle; when one cycle produces a result that the next cycle needs, you end up waiting multiple clock cycles until it comes through.
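
In other words, the minimum clock period is set by the slowest register-to-register path. A toy calculation with invented gate delays:

#include <stdio.h>

int main(void) {
    /* Invented numbers: a critical path of 20 gate delays at ~10 ps each,
       plus some flip-flop setup time and clock skew. */
    double gate_delay_ps = 10.0;
    int    gates_in_path = 20;
    double overhead_ps   = 30.0;

    double period_ps = gates_in_path * gate_delay_ps + overhead_ps;  /* 230 ps */
    double f_max_ghz = 1000.0 / period_ps;                           /* ~4.3 GHz */

    printf("min clock period: %.0f ps -> max clock: %.2f GHz\n",
           period_ps, f_max_ghz);
    /* Clock it any faster and the signal has not settled when the next
       edge latches it, so you add pipeline stages or wait states. */
    return 0;
}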

we are 100% black

Shitloads of makeup, darker black the better though. Stop having plebian taste

You wouldn't fuck a coal

>If the clock runs too fast, signals can no longer propagate through the logic gates within one cycle; when one cycle produces a result that the next cycle needs, you end up waiting multiple clock cycles until it comes through.
We're nowhere near that level though

>but why tho
[1/2]
It's due to the nature of the circuits themselves. A MOSFET changes its channel resistance mostly based on gate-to-source voltage. At a certain point this is considered on, and at another it's considered off. The gate is the input. The gate is fed by another transistor, and it looks like a very large resistor (around 10MΩ if I remember right) and a really small capacitor.

In the on state a MOSFET ideally has 0 resistance and in the off state infinite resistance. In the 0 resistance state there is no voltage drop across it and the power dissipated is ideally 0 because power is current * voltage. In the off state there is no current so power is again 0.

The problem is that you can't switch states immediately. It will spend some time in between and it will have current and voltage and therefore power. The resistance will dissipate power so switching is the major cause of dissipation in a working CPU. The gate resistance is large and practically an open circuit. The capacitor has to be charged every time though and even though it is small it is significant when you're switching at the GHz level.

When the clock signal cycles, a big boolean function calculates the next state based on the current state and the current input. This boolean function is implemented as transistors. The calculation involves the potential for a state change, and therefore the potential for charging or discharging capacitors. Every one of these changes has to occur before the next clock cycle, which means those capacitors have to be charged quickly. So a higher clock means more time spent neither on nor off in the transistors, and more power spent charging and discharging parasitic capacitance.

Then add the fact that MOSFET design involves trade-offs between power and switching speed. As mentioned earlier, high channel conductivity in the on state and zero channel current in the off state are considered good for power consumption. You can choose MOSFET design parameters to optimize for this. However, many of these choices make the MOSFET slower.

Longer channel length, for example, lowers the MOSFET's leakage current in the off state. In the off state the field from the gate fucks with the charge carriers and stops current; a longer channel gives more room for this to happen, just as a longer resistor has more resistance. However, a longer channel leads to higher switching times: fields have to propagate at a finite speed, and a longer channel means more time.

There's probably hundreds of these little tradeoffs and they all add up to power dissipation increasing with clock frequency. Even worse it's not a linear increase. It starts to run away at around 3.5GHz.
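
A crude way to see the runaway: model dynamic power as P = k·V²·f and assume the required voltage starts climbing with frequency past ~3.5 GHz. Every constant below is invented; only the shape of the curve is the point:

#include <stdio.h>

int main(void) {
    /* Crude model: dynamic power P = k * V^2 * f, with the voltage needed
       climbing with frequency above ~3.5 GHz. All constants are invented. */
    double k = 8e-9;   /* lumps together activity factor and switched capacitance */

    for (double f_ghz = 3.0; f_ghz <= 5.0; f_ghz += 0.5) {
        double v = (f_ghz <= 3.5) ? 1.10                         /* flat region */
                                  : 1.10 + 0.20 * (f_ghz - 3.5); /* climbing region */
        double p = k * v * v * (f_ghz * 1e9);
        printf("%.1f GHz @ %.2f V -> %5.1f W\n", f_ghz, v, p);
    }
    /* Power climbs far faster than the clock once the voltage has to climb
       with it; that is the runaway described above. */
    return 0;
}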

Can modularity resolve this?

What do you mean by modularity?

Yeah DDR3 was a massive leap, it runs as fast or faster than the L2 cache in the Core 2s

How come you've been hovering on this board for so long now?

It's been years and you're still hovering with increased frequency.

Oh, you know I would

What bothers me the most is her clothing choices and the chipping nail polish. I think I might be gay.

You are now aware that between early 3GHz CPUs and modern 3GHz CPUs there is a difference of about 32x, thanks to superscalar architectures, heuristics and memory subsystem improvements.

>You are now aware that between early 3GHz CPUs and modern 3GHz CPUs there is a difference of about 32x
do you honestly believe this

Yes, because it's true.
There was a Phoronix benchmark about half a year ago.

5% per new generation has only been going on for a few years.

What bothers me is she's a disgusting nigger.

Haha wow look at this faggot

What sorcery allowed my 2500k to OC to 4.8 simply by upping the multiplier in the BIOS, without changing anything else, on a $50 air cooler?

What voltage?

This

Performance per watt is the key metric here.

I don't remember. I never changed it from default/auto in the BIOS. Like I said, all I did was go into the BIOS and up the multiplier until it was at 4.8GHz. Literal 10 second job.

Automatic overclock often puts a lot of voltage on the CPU. You may be killing it slowly if it's above 1.35V

Even a 5ghz 7700k gets up to 98-100% cpu usage on all threads for some games at under 144fps.
The fight is over. You 4 core faggots lost.

5ghz is not enough for 4 cores.
It's hard to reach 6ghz or to increase the true-IPC by 20% so it makes more sense to just increase cores by 50%. Which is what Coffeelake is doing.

7700k is a space heater.

he's right though, maybe not x32 if you factor out the whizzbang tricks but x10 definitely even just in raw performance

My temps average 65c during gaming sessions and 50c when idle at 4.8. Is that too much?

Real IPC has hardly increased since Ivy Bridge.

It's increased through shit like AVX256 which is just bullshit when it doesn't increase for more typical ops.

>It's increased through shit like AVX256 which is just bullshit when it doesn't increase for more typical ops.

I don't know much about this, but just wanted to confirm: vector math instructions are pretty much useless for consumer software, right? I mean not specialized hardware built for a specific platform, but rather, say, a video game; it can't target a specific CPU extension, since that extension may be unsupported on the target computer and the program will just not work?

Because we learned a long fuckin' time ago that higher clocks don't make a better processor.

>5%
lmfao it's been less than 3% per year since Ivy Bridge.

AVX and AVX2 are useful in consumer software, but only to a small degree. It's usually stuff that's offloaded to the GPU instead that doesn't affect gameplay.

AVX256 and AVX512 are not.
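
On the "can't target a specific extension" worry: real programs check the CPU at runtime and dispatch to whichever code path it supports. A minimal GCC/Clang-style sketch for x86; the kernel functions are placeholders, not anyone's actual code:

#include <stdio.h>

/* Two versions of the same routine: in real code the first would use AVX2
   intrinsics and the second a plain loop. The bodies here are placeholders. */
static void kernel_avx2(void)   { puts("running the AVX2 path"); }
static void kernel_scalar(void) { puts("running the scalar fallback"); }

int main(void) {
    __builtin_cpu_init();                 /* GCC/Clang builtin: populate CPU feature info */

    if (__builtin_cpu_supports("avx2"))   /* one CPUID-based check at runtime */
        kernel_avx2();
    else
        kernel_scalar();
    return 0;
}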

>need to do some quite extensive testing to figure out if your cooling solution is good

>run hwmonitor, cpu-z, and prime95 at the same time
it's not that hard man!

Nothing wrong with being an oldfag; I've been on Sup Forums since the year of its launch, my first year of high school.

DDR4 was mainly introduced to address the row hammer exploit.

Can technology resolve the limitation of the binary nature of MOSFETs through the modulation of energy?

>When you send electricity through a wire, there is resistance. That resistance causes heat. Unless the CPU has zero resistance, there will be heat.

This is not entirely true. While electrical resistance is a factor, there is also the cost of switching the transistors between their logical states. There is a limited speed at which this can happen, and the more often it happens, the more heat is generated.

>Ivy

Ivy was just a die shrink of Sandy and it didn't even OC as well. And if we're talking IPC, even Sandy wasn't that great an improvement over Nehalem. It mainly clocked a lot higher.

AVX should definitely be excluded from IPC comparisons. Maybe even SSE.

I made a small test a few days ago: I Cinebenched an Opteron 2212 against a 5820k at the same clock.

In single-thread, the Opteron was ~3 times slower.

>99% of people don't need more
good try shekelstein

Incorrect. More clock is always better, all other things being equal. The issue is that we are at the clocking boundary for silicon designs. However, it is not out of the question that a hypothetical XYZ material will allow the frequency to be pushed further.

I remember that around 1998 I read an article that placed the practical boundary at about 2.5-3GHz, and it was theorized that it would be reached by 2006-2007. Lo and behold: clock rates have pretty much stalled over the last decade and, in some cases, regressed.

In the mid-90s the highest-clocked CPUs were server/workstation ones. DEC Alphas reached 600MHz when x86 toys were struggling to reach 300. These days server CPUs are the lowest-clocked ones.

Yes. 2-3ghz is still optimal for power efficiency.

Past that, you tend to need many times more voltage for a given increase in performance.

But frankly, as a programmer, I think the biggest problem is programming.
You have fp32 used when fp16 is sufficient, for example. That's double the energy used and half the performance to get the given result.
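
The data-movement half of that claim is easy to quantify; the sketch below just counts bytes for an arbitrary 100-million-element array (energy per operation is more complicated, but it moves in the same direction):

#include <stdio.h>

int main(void) {
    /* Moving 100 million values through memory (count is arbitrary). */
    const long n = 100000000L;

    long bytes_fp32 = n * 4;   /* 32-bit floats */
    long bytes_fp16 = n * 2;   /* 16-bit half-precision floats */

    printf("fp32: %ld MB, fp16: %ld MB\n",
           bytes_fp32 / 1000000, bytes_fp16 / 1000000);
    /* Half the bytes moved and half the cache footprint for the same number
       of values, which is one reason wider-than-needed types waste energy. */
    return 0;
}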

>Incorrect
No, correct.
Clock speeds alone do not make a processor better. Yes, obviously, if you take two identical chips and clock one higher it will perform better, but muh ghz is not a good metric of actual performance.

If we had the hypothetical capacity to run a monolithic single core clocked to infinity, we would be using that within a single thread instead of distributing work across several cores.

50C idle is quite high; try lowering it by 10C: change the thermal paste or get an NH-D15. You're probably hitting higher than 65C at full load. Aim for approximately 1.3V or under and under 70C usually; heat degrades the silicon more than voltage does.

Haha wow look at this nigger loving faggot.

He fucking said >all other things being equal

If you were on a deserted island...

>Implying you'd not feed her the D until it fell off.
Stay virgin

I ask this a lot myself ... still no logical answer.
My Prescott 2.4 pulled 3.6GHz and I thought that was amazing until my current system. My 3930k, water cooled, pulls 5GHz (125MHz x 40 multiplier), matching QPI to clock. It fails with older graphics cards like the 550 Ti but works with my R9 390. At 5GHz my 3930k smashes other CPUs into bits. I honestly think 10GHz is what we should be shooting for by 2020.

My 3930k
valid.x86.fr/bench/un65p1
userbenchmark.com/UserRun/2182913