Is "2600k will last forever" just a meme? If CPUs have improved 10% each generation since 2600k...

Is "2600k will last forever" just a meme? If CPUs have improved 10% each generation since 2600k, wouldn't a 7700k be quite a bit better and not just marginally?


No because no germanium

>If CPUs have improved 10% each generation since 2600k,

The majority of performance gains since sandy have been due to higher stock clocks

1990's
>All that matters is clock speed

2000's
>Yo dawg cores are all that matters

2010's
>We were only pretending. Clock speed is best.

>1990's
>>All that matters is clock speed

>2000's
>>Yo dawg cores are all that matters

youtube.com/watch?v=tPBtXUUeFK0

multi core is great, if devs actually bother optimizing for it.

Unfortunately most are too lazy and we get shit that throttles CPU0 and doesn't touch the other cores.

Multicore is still great for tasks that can't be parallelized efficiently on a GPU but can still use more than a single core.

If you overclock them the difference becomes minimal.

When the consumer dual cores came out I remember all this talk about tools that automatically allowed a program to use multiple cores. What happened to this stuff?

You're looking at a 20-25% improvement if you upgrade from a 2600k at stock speeds. Still not worth it.
It was a pipe dream. Never happened.

In games the difference is only between 0-10% mostly and in some titles 20% at best in fps.

have you actually tried multithreading your programs? shit is still too complicated imo

It needs to be implemented on the level of the engine / runtime and done automatically. Devs will never do it consistently.

Sandy Bridge overclocks higher than Skylake but we've seen like 20% improvement in IPC. Not much at all, obviously. Seems like Kaby finally overclocks higher than Sandy again.

my 2600k shits itself when I push it past 4ghz. Unlucky batch I guess.

Meanwhile I hear 7700k easily goes 5ghz+. That would be a significant improvement.

Is DDR4 just a meme too? I remember reading that memory speed didn't really mean jack shit.

Of course there's the added benefit of the new USB 3.1 gen 2 speeds, as well as the M.2 SSDs and that new shit Intel is cooking up called Optane.

We are currently entering the era of cores, the APIs know it, the manufacturers know it, the devs know it
Clock speed only became relevant recently because that "means" that zen will beat all of Intel so suddenly clock speed is all that matters
Once zen loses relevance the fight will be on for cores, I could see 6 or 8 cores becoming completely normal within 2-3 years. THEN zen will be relevant

is that comparing overclock to overclock or overclock to stock?

You'll see a 25%+ increase

2nd gen i7 is about equal to a 6th gen i5 in multi and a 4th gen i5 in single and quad

>DDR4 just a meme

DDR4 is only relevant because it allows higher density memory for the same cost, meaning we'll see 16-32gb ram become commonplace (unless they artificially keep prices high because consumers are stupid and will buy it anyway).

Overclock to overclock, sandy bridge overclocks better than skylake

I've been agonizing over whether or not I should upgrade my i7 950. I've got it overclocked to 4.2ghz but I feel like I could probably get better performance with a new CPU at the same speed.

according to ppl itt, you're looking at 25% performance increase minimum. Seems worth it to me, but I've got money burning a hole in my pocket.

>If CPUs have improved 10% each generation since 2600k
They haven't. Even if you include the fact that Kaby Lake will clock around 300 MHz higher than Sandy Bridge, the overall performance increase will be around 30-35%. Nowhere near enough to justify the massive cost of these chips.

Hopefully AMD will change that, because (a) you won't have to buy a top-tier "K" chip to overclock, (b) you won't have to buy a top-tier "Z" series motherboard to overclock, and (c) AMD are always more competitive on price anyway.

>Hopefully AMD will change that

>Unfortunately most are too lazy and we get shit that throttles CPU0 and doesn't touch the other cores.

t. templeos

Actually you will need to buy a special Z board to overclock well; it's called the Z370.
Also Lisa said they no longer want to be the budget option, they want to be Intel's competitor in both quality and more or less price. She said "we don't want to do what Intel is doing", but she also said she doesn't want to be the low-prices brand.

GOD SUPPORTS MULTITHREADING

trips confirm

I say fuck you and ride on my first gen i7

No, 3 of the 4 AM4 chipsets support overclocking.

Their chips will be priced competitively. I'm not expecting the 8-core to be the same price as the i7-7700K, that's ridiculous, but I think the 6-cores will be competing at that level, and the 4-cores around i5 level. As long as their IPC is close they should be a much better performance per dollar option.

Not really true, the CPUs are unlocked, and the CPU can raise the boost clock if it is still within its power/temperature curve.

For usual overclocking you need one of the better chipsets (X370, B350, X300) instead of the basic A320/A300 chipsets, and the mainboard must support it.

The IPC is the same on both the 6700k and 6800k but the two extra cores add $500. This market is relatively successful. If AMD entered a market that is already buying $800 6-cores, with a direct competitor to the $800 6-core, they're not gonna let that opportunity pass them up

Technically the i7-6700K has 3-5% higher IPC compared to the i7-6800K but, yes, more or less the same. However, the i7-6800K is nowhere near $500 more than the i7-6700K. In the UK, it's not even £100 more.

There'll have to be a 6-core Ryzen chip at the same price point as the i7-6800K at the very least. Even if that's true, the AM4 platform will be a lot cheaper than the X99 one.

Whatever could be extracted, already was. Current cpus do what is a technological equivalent of black magic to use as much resources as possible. The same goes for compilers. However proper parallelisation cannot happen on such a low level unless you are willing to accept compile times in hours or days.

Moreover, the prevailing model of programming is a branchy, side-effect-laden, object-oriented one. With functional programming without side effects you could expect some level of parallelisation.

Not necessarily. However, this requires a change in the prevailing programming paradigm. Moreover, there are no 'brainless' methods of getting parallelism, because 99% of devs are either under pressure to deliver, under-educated, or just plain dumb.

Being buried under miles of legacy code does not help either.

However, we are slowly bumping onto IPC and clock walls, so threading and later numa-aware throughput processing is the way to go.

why upgrade when there is LITERALLY no practical benefit

sure if you're building a new PC right now get the newest one, but no point in upgrading when the gains are shit

This, pretty much.

Call me when intel releases a chip stock clocked at 8ghz

No because jews not in oven.

>Overclocking

oy vey.
buy new cpu every year, goy.

If the CPU die was over 1.5 inches long, that would be bottlenecked by the speed of light

I disagree.

300,000,000 m/s
Cycling at 8,000,000,000 times a second, light/electricity could only go 1.5" per cycle

And that's propagation in free space; signals are even slower running through a medium like silicon

So make it smaller.

But smaller dies plus higher clocks get way too hot way too fast, even for very well-cooled circuits. Plus there's the actual electromagnetic limitation: we just can't go much smaller without electrons taking unstable and unpredictable paths.
Quantum computing is a great step forward; we don't need high clocks when we have trinary code

a stock 6700k is about as powerful as my 4.7ghz overclocked 3930k (which has two more cores)

6700k or 7700k would stomp the shit out of sandy vag no matter how far you overclock it

Not exactly stock...
tomshardware.com/news/Overclock-Intel-Celeron-TiN-CPU,9509.html

>But smaller dies plus higher clocks get way too hot way too fast, even for very cooled circuits. Plus the actual electromagnetic limitation we have, we just can't go much smaller without electrons having unstable and unpredictable paths
Just use superconductor junctions instead of transistors and we're good to go, new ceiling should be >500 GHz.

>he first heated the celeron CPU to over 200°
Baking your CPU for better overclocks confirmed

>500GHz
>half a millimeter wide CPU die, under perfect conditions
I'd love to see the heatspreader on that one

Well, look, until we get room-temperature superconductors it would basically need to be housed in a tank of liquid helium, so cooling wouldn't be a problem.
Besides, superconductors don't really give off much heat when you push current through them, which is what permits the high speed, so it wouldn't be much of a problem anyway.

It would still be a half a millimeter CPU die, which would be nuts to try to manufacture, regardless of whether or not a system to run that stable could exist

Welp, as I said it would be the ceiling. Lots of ground to cover before that.

I wonder how big the 10nm dies of Ice Lake will be
If cooling remains a shrinking issue maybe we'll see some serious clocks in 2018

IPC is king and always has been

clockspeed and moar coarz has always been an artificial way of improving IPC

Did someone say serious clocks?

Is that even Y2K17 compliant

what are those called?

No, I had to upgrade.

Nixie tubes.

Why is the color wrong in that image?

Government already has the next 20 generations of processor locked away in secret labs and in the Utah data center. Civilians won't see that technology until next century desu

That's because those are VFDs and not true nixie tubes like the other picture. Vacuum Fluorescent Displays work with, well, fluorescence, while nixie tubes work with neon gas. One glows greenish blue, the other orangish red.

>VFDs
>neon gas
Aren't these illegal in the US?

>IPC is king and always has been
>clockspeed and moar coarz has always been an artificial way of improving IPC
Actually the real main factor is effective computation per unit of time.

>pipe dream
>pipe
HAH I get it

But it's not possible to increase IPC for some highly serial operations. It will always be 1 per clock at best.

>VFDs or neon gas being illegal

m8, I own firearms, do you really think they would ban gas tubes? Also these would be illegal as well.

Yeah but you cannot host OpenBSD


so long...

>aren't those illegal in the US?
Maybe you are thinking of the incandescent light bulbs that used to go in light fixtures and lamps.

>5% IPC gain on ivy
>5% IPC gain on haswell
>5% IPC gain at most on skylake
>1% IPC gain on kaby lake
>this adds up to 116.920125% total gain
so in other words, to match the IPC of the latest chips, you need to overclock your sandy bridge chip to the clock speed of the equivalent kaby lake chip + 16.920125% of the kaby lake chip's clock speed

assuming im not braindead, that means a 3.40 GHz 2600k can match a 4.2 GHz 7700k by being overclocked to 4.911 GHz, a rather significant overclock, but not impossible. meanwhile the 7700k can at best overclock to 5 GHz, just a 200 MHz difference from its maximum turbo boost, and an 800 MHz difference from its stock clock, in exchange for greater voltage wear on the silicon.

>Aren't these illegal in the US?
If so a lot of your advertising signs would be scrapped.

I really want to upgrade but I got realllllyyy lucky with a 2600k that I've had overclocked at 4.6 for years now. Hopefully Zen isn't shit.

>Just automatically make the programs parallel

So you have no idea how it works? Good to know.

>116.920125%
11.6920125%

the only reason to upgrade from 2600k is to 4790k which has overclocking and vt-d.

> 2600 vt-d
> 2600k overclocking
> ???
> sheckles!!!

Mmm I've got my 2500k overclocked at 4.4ghz
Honestly it goes alright. If it doesn't die soon I'd probably be more inclined to upgrade my graphics card.

good point, what i meant to say was that kaby lake has 116.920125% the IPC of sandy bridge, or 16.920125% over sandy. that said, how did you get that number?

>numa-aware throughput processing
huh?

>upper boundaries of physics are starting to become the limiting factor
what a time to be alive

Electrons only move 1/3 the speed of light

generally or in silicon?

They don't even have that much increase, since most of the time they increased clocks, not the IPC.

Indium Antimonide.
Chalcogenide

Look that shit up /g
Intel is going to rock the world in the near future

Chips with laser beams and shit

AMD and Nvidia will be dead

Micron is in on this too

They've been milking us for years

The payoff is coming soon

The patents are up in 5 years.


You heard it here first

so morning of owl endorses AMD? sign me in!

my understanding is that those were the rough IPC gain figures

what

electrons move a lot fucking slower than that (drift velocity), although they can convey changing electric fields at decently large fractions of c (velocity of propagation).

CPUs are already bottlenecked by propagation speeds. the few fractions of mm distance between an integer execution pipeline and the far edge of an adjacent L1D cache is already a hugely limiting factor in core designs. I.e., bigger caches take more clocks just to access the needed cells, which causes more clocks for read delays, which makes scheduling and mispredict handling harder, and so on...

Gains are not just for FPS, depending on what games you play a lot of CPU load is calculation for game aspects rather than just rendering the display.

I'm looking forward to a range of games to run better when I make my replacement/upgrade move in the spring.

After a certain point with "optimising for many cores" it becomes more practical to offload processes to a gpu.
The kind of work a CPU tends to get stuck with is the messier stuff that doesn't scale well across many cores; the work that does scale is exactly what GPUs excel at.

Improved according to what criteria?

Because, according to the following source, not only has architecture performance across all the generations in between increased by only about 20%, it even regressed in many cases.

hardocp.com/article/2017/01/13/kaby_lake_7700k_vs_sandy_bridge_2600k_ipc_review

> If CPUs have improved 10% each generation since 2600k
[citation needed]

Both of these.

Electrons themselves travel on the order of a meter per second. It's slower than a normal walking pace, if I remember correctly. But the propagation happens at nearly light speed.

Imagine you have a hose filled with water; if you push a plunger in at one end, the water itself doesn't move very fast. But, the water at the other end starts moving almost instantaneously because the whole hose is filled. Basically the same thing with electrons in wires.

>CPUs are already bottlenecked by propagation speeds.
Not just propagation speeds. The signals moving around in a CPU have been, for quite a few years now, bound by transmission line physics. The frequencies of operation hit a point where the physical dimensions of the core amount to large fractions of the signal wavelengths, which is when signals start operating more like waves than currents and you have to deal with reflection and mismatched impedances.

t. unemployed EE grad who's never actually worked in the field

>#pragma omp parallel for
It's one line for basic shit and a couple more for anything else. Kill yourself please.

they don't really improve 10%, they just jack up the stock clock and say it's 10% better, leaving out that the gain is only from those higher stock speeds

>Gains are not just for FPS, depending on what games you play a lot of CPU load is calculation for game aspects rather than just rendering the display.
This this this this this.

Graphics are easy to max out as long as you have a functioning CPU; just drop a 1080 in there and it'll run anything you want at whatever settings you pick.

It's when you get into complex situations that the CPU becomes a determining factor not of frame rate, but sim speed. In an FPS this doesn't ever become an issue; you have 8-32 people running around a map shooting at each other with hitscan weapons. In an RTS this becomes a biiiiiig issue.

Take Supreme Commander, for instance. A game released a decade ago. Sure you can max out the graphics and run it at 200fps with even a modest gaming computer today, but when you have a 4 player match and each guy is controlling an army of 400-1000 units, each of which is firing multiple projectiles per second, all of which are actually modeled and tracked in 3D space in real time, even a 6950X is going to have to slow down the speed of simulation.

If all you play is CS:GO, sure, spending more than about $100 on a CPU is retarded.

If you play just about any strategy games or do any kind of video/audio/photo editing (or stuff like Matlab) then your CPU is a major concern.

>hardocp.com/article/2017/01/13/kaby_lake_7700k_vs_sandy_bridge_2600k_ipc_review
>6 years later, and we only have ~20% better performance
Shit. Moore's Law really is dying.

However,
>it even regressed in many cases
The 7700K was faster in every single benchmark, I think you were looking at the ones showing the *time* it took to perform an operation; the 7700K had a smaller bar because it went faster. They should've shown those with a different bar graph to more clearly differentiate that smaller bar = faster.

>of a meter per second
Even less. According to the wikipedia page on drift velocity, electrons in a 2 mm diameter copper wire at 1 A move at about 2.3x10^-5 m/s, or 0.000023 m/s.
You'd be lucky to get a meter in 12 hours.

Sumbitch, my knawledge is slipping. I didn't think they moved that slow.

Thanks for pointing this out. I was careless and got carried away by their increase/decrease statements in the comments following up to the graphs.

>Sumbitch, my knawledge is slipping. I didn't think they moved that slow.
If you think about it the amount of energy you'd need to accelerate all those electrons up to 1m/s would be pretty high.