7.2% clock increase gives it a 7.2% maximum performance increase
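That claim is just linear frequency scaling taken as an upper bound; a minimal sketch of the arithmetic (the example clocks are hypothetical, picked only to reproduce the 7.2% figure):

```python
# Upper bound on the gain from a clock bump: performance scales at most
# linearly with frequency, and only for purely core-bound workloads
# (no memory or I/O stalls), so the real-world gain is usually smaller.

def max_speedup(base_ghz: float, oc_ghz: float) -> float:
    """Theoretical ceiling on the performance gain from a frequency increase."""
    return oc_ghz / base_ghz - 1.0

# Hypothetical clocks chosen only to reproduce the 7.2% figure:
gain = max_speedup(4.5, 4.824)
print(f"max performance increase: {gain:.1%}")  # max performance increase: 7.2%
```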

>7.2% clock increase gives it a 7.2% maximum performance increase
L M A O

2600k master race

Other urls found in this thread:

tomshardware.com/reviews/intel-kaby-lake-core-i7-7700k-overclocking-performance-review,4836.html
youtu.be/4sx1kLGVAF0?t=24
tomshardware.com/news/intel-silicon-photonics-light-laser,32524.html
twitter.com/SFWRedditVideos

is kaby lake even better at performance per watt?
is intel finished?

price/performance is going to suck. If this is all Jewtel can bring to the table, AMD is going to eat their lunch, because nobody really needs i7s unless it's for pro use anymore.

Bring the cost down or I'll be using a 920 forever.

3770k > 2600k > everything else

>4.8Ghz @ 1.3V
Did I misread something? Was this all they were able to push with such high voltages?

>the market reached such a status quo that intel competes with itself
>amd is nowhere to be seen
ayymd cucks on suicide watch

Skylake (and I assume Kaby Lake too) can tolerate higher voltages much better than Haswell.
I have my 6700k at 1.35v

They could not push it much higher since the core reached 100c at times. Bad IHS paste strikes again.

That's not my point. My point is that you get a 0.6GHz OC at horrible voltages (power consumption, heat).
I can push my 3770k past 4.6GHz if I wanted to, giving me an OC of ~1.2GHz @ 1.3V (currently running 4.5GHz at 1.208V because muh heat), which is godlike compared to what is presented in this article.

I know you can't effectively compare X generation OCs with Y generation OCs, but this is still REALLY disappointing.

>1.3v

1.3v isn't even the max safe voltage for skylake chips, why didn't they push this to 1.4-1.45v?

if its hitting 4.8ghz at 1.3v then 5+ ghz should be possible

My 4690k @1.27v is sitting @4.7Ghz. I'd have to check on my old 2500k rigs and see what they're set at. Glad I was able to enjoy my 2500k and this 4690k for so long. No point in upgrading.

tfw my stock 8350 is set at 1.3v

TFW my 4.3Ghz 8320 is at 1.32V

4790k here, no point in upgrading. maybe in 3 years

Sorry, my 5820k doesn't need 1.39v to achieve a 1.2GHz (27%) OC. At work, but my PC at home is at 4.5GHz @ 1.32v and rock solid. Had a stable OC of 4.7 @ 1.35 but don't like pushing my voltage to that point on a regular basis, and 4.5 is just fine for my usual tasks.

37%*

>tfw your 4790k is so slow now

send it to me. my desktop is a pentium 4, and my laptop is an atom

>0-4% increase above a 6700k

>tomshardware.com/reviews/intel-kaby-lake-core-i7-7700k-overclocking-performance-review,4836.html
Think i'll stick with my skylake i7, it seems to be doing just fine.

>P4 and Atom in the same post.
I'm so sorry.

Not bad, but I've no reason to upgrade from my current CPU when it's a 6700K.

>the market reached such a status quo that intel competes with itself
underrated post

What gets me is my new laptop's i3 is basically the same specs as my old laptop's i5. I mostly flip it around and use it as a tablet for my e-books.

After reading all that, I feel pretty good about dumping my cash on a 6700K. Not missing much I suppose.

>tomshardware.com/reviews/intel-kaby-lake-core-i7-7700k-overclocking-performance-review,4836.html
>6/8 Cores aren't even mainstream yet

Why do I care?

I see no reason to upgrade from my 4690k which I haven't even bothered to OC yet

Wrong. Consumershit no es bueno.

Same here I don't see a reason to upgrade from mine.

I considered an X99 board, but in the end it would be money thrown away.
I can just buy better monitors or other crap instead.

So all that article states is that he could push Kaby Lake to 4.8GHz whereas Skylake stands at 4.6GHz at the same voltage. Seems like hardly any improvement.

>I know you can't effectively compare X generation OCs with Y generation OCs, but I'm going to anyway because I'm a massive cocksucking faggot and I don't know what they did with the internal voltage regulators the last two generations

3770k is based as fuck

My dude. 3770k has done me well.

Ballpark estimate, how much is the core i7-7700k expected to cost?

Three fiddy

Three fiddy

>kaby lake improves by like 2%
>meanwhile zen is 40% better than its predecessor
intel are you even trying??

$450

I think they had a bunch of fanboys tell them it got 20% gains in performance, and scraped whatever was in their underwear out to use as paste.

because shit was almost on fire at the settings they had it at.

I have had my 4770k delidded at 1.45v since 2013

fucking this, six years and it's still going

Hasn't it already become clear to you that the max attainable clock changes little from one CPU arch to the next, starting from Sandy Bridge? They're all within a few hundred MHz of each other. The only reason you're getting 1.2GHz out of a 3770K is because it starts lower to begin with, not because it can hit some amazingly high clock. Also, the 3770K turbos up to 3.9 by default; by a 1.2GHz OC do you mean you can run it at 5.1? That would actually be quite amazing. I assume you're actually counting from its base clock of 3.5, in which case it's pretty much what you can expect from any Intel CPU since Sandy.
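A quick sketch of how the quoted OC percentage shifts depending on whether you count from the base or the turbo clock (3770K clocks taken from the post above; the 4.7 GHz target is the claimed overclock):

```python
# The same overclock looks very different depending on the reference
# clock. i7-3770K numbers from the post above: 3.5 GHz base, 3.9 GHz
# max turbo, with a claimed ~4.7 GHz overclock.

def oc_summary(ref_ghz: float, target_ghz: float) -> str:
    """Describe an overclock relative to a chosen reference clock."""
    delta = target_ghz - ref_ghz
    return f"+{delta:.1f} GHz ({delta / ref_ghz:.0%})"

print("vs base: ", oc_summary(3.5, 4.7))   # +1.2 GHz (34%)
print("vs turbo:", oc_summary(3.9, 4.7))   # +0.8 GHz (21%)
```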

2600k is probably one of the best buys in PC history. If you bought it in 2011, we're approaching nearly six years later and it's still relevant.

I've been wanting to upgrade for years but Intel can't entice me. I'm not willing to go to the X platform and pay for the 8-10 cores I want.

Damn, I hope Zen delivers the goods next year. I'll jump ship as fast as I can.

>Any Year
>Using Inferno Bridge

Fuck, Intel needs to be taken to task.
The only company pushing single-thread performance anymore is IBM, and POWER8 is priced so jewishly they make Intel look like saints.

You consumer plebs don't understand what's happening. You think intel could just create an arch with 50% higher IPC but is holding it back from the market for lack of competition. That isn't what's happening; they can't do it. Serial integer performance isn't making substantial gains because diminishing returns are a law. x86 has been refined for decades; more to the point, our whole approach to conventional logic has been refined, constantly moving in one direction. We see the exact same encroaching limits regardless of ISA.

It isn't a matter of increasing execution resources. We have had 3-ALU-wide cores where that 3rd ALU is so underutilized by a single thread that it can support another logical one. There is a fixed limit to how fast we can feed instructions through a core. Every 1% improvement here costs more money than the last 1%, and takes more and more man-hours to achieve.
Even FPU performance progression is slowing.

Give it a few more years and ARM will be in the exact same place, and they're quite up front about it. They know they're going to have to pursue higher clocks inside of low power envelopes because they won't be increasing performance per clock dramatically forever.
IBM's POWER9 made big strides in perf/clock in comparison to POWER8, but POWER8 was nothing to write home about. In perf/clock on the FPU front they were decidedly behind intel. In certain branchy integer workloads POWER8 is actually slower per clock than AMD's Piledriver.
POWER9 chips are also topping out at just 4ghz instead of 5ghz like their predecessors.

No one in any facet of the industry has a viable solution, aside from maybe SoftMachines with their VISC arch. The only proof of their arch is a low-clocked test chip with projections of how it would perform if they could clock it higher and maintain coherency. Intel recently acquired them, and not for very much either. The odds of it panning out don't look great.

SoftMachines/VISC was always a scam.

You just can't magically clock up an architecture as if pipeline stages don't have very real latency constraints from inter-module communication. Non-Lego-block architectures already have to deal with shit like register files and L1D blocks being too many fractions of a mm across, so it takes too big a slice of a clock cycle to access their far sides from the integer execution pipelines.

Athlon 64 X2 4800+ > everything else

that's cool

then I remembered the CPU alone will cost like $400 CAD and I just don't give a fuck because my entire i5 3570k build was $1000

>3770k
>aka the CPU that gave delidding a reason to exist
No. I'll take a 2600K or a 2700K with superior soldering any day over the shitty TIM

i7 3820 would have been an even better purchase

those are Sandy Bridge-based with quad-channel DDR3, more PCIe lanes, more cache, and still hit 5GHz. They were about $350; you just had to pay $100 more for the mobo.

3770k was the beginning of everything wrong. Hardly any IPC increase, and shitty TIM instead of soldering. It OCs far lower than a 2600k unless you have a custom loop or delid. Educate yourselves, retards.

youtu.be/4sx1kLGVAF0?t=24

Not really, since it's a locked core multiplier chip and a quad core.

bclk was easily overclockable on LGA2011; the multiplier was unlocked downward, and many mobos had a 125MHz bclk option that kept everything else in spec like the 100MHz bclk option did, and then you could adjust up and down from there. Seemingly everyone got those to 4.7-5.0GHz.
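A quick sketch of the strap math described above (the i7-3820 multiplier values here are illustrative, not verified):

```python
# LGA2011 core clock = BCLK strap x multiplier. The 125 MHz strap kept
# everything else in spec, so even multiplier-limited chips like the
# i7-3820 could be pushed well past stock. Multiplier values below are
# illustrative, not verified.

def core_clock_mhz(bclk_mhz: float, multiplier: int) -> float:
    """Resulting core clock in MHz."""
    return bclk_mhz * multiplier

print(core_clock_mhz(100.0, 39))  # 3900.0 (near-stock territory)
print(core_clock_mhz(125.0, 38))  # 4750.0 (in the 4.7-5.0GHz range cited)
```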

Makes me so happy I still have my 2600k. Gonna build a NAS around it soon

Scaby Lake is looking really disappointing. I'm thinking the best option is to just wait for the new server chipset coming out and get one of the newer Skylake Xeons. Yeah, it will be pricey as hell, but the new chipset is looking absolutely beastly, while the desktop stuff is completely stagnant.

E5-1650 here
Not going to upgrade til 2025 if shit keeps going like this.

My nigger. Is there anything that 2600k can't handle?

>4670k
>can't OC for shit

fuck you intel

used to run mine at 4.8 on a corsair all-in-one cooler for years

I got a 4790k after my 2600k, shit was disappointing

I thought the meme was 2500k?

Wish I never gave that puppy away.

>MFW my FX-8320 stock voltage is 1.4

Haswell stronk

the viable solutions are simple:

1) refine the process
2) reach higher clocks with said process
3) multithread the code; granted it can't multithread forever, but if you can split the workload up you can make gains there too
4) potentially, reverse hyperthreading; there are patents for various ways to do it, but not sure if anything ever panned out
5) move past silicon

the issue is that not a single one of these options is simple in execution.
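The caveat on option 3 ("it can't multithread forever") is Amdahl's law; a minimal sketch of why the serial fraction caps the gains:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the work and n is the core count.

def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup on n cores when a fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, gains saturate quickly:
for cores in (4, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.95, cores), 2))

# The ceiling is 1 / (1 - p) = 20x, no matter how many cores you add.
```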

Pushing clocks through any means has literally nothing to do with the underlying issue. "Reverse hyperthreading" is the VISC architecture described.

I want arm64 to be desktop ready.

amd has their own patents for reverse hyperthreading too.

And as I see it the issue is stagnant speeds: refining the processes so they can truly be called 14nm would net gains all around, and finding a way to make chips clock significantly higher would also net gains all around. For quite a while, intel has had the same peak OCs; the only difference is they are making chips that hit a higher minimum OC.

> so they can truly be called 14nm
This idiocy needs to stop being regurgitated. The public-facing marketing name for a process node is based on one FEOL feature or another, barring typical ASML nomenclature. The BEOL has never been the deciding factor in what a process node is called.

The underlying issue is that performance per clock is not dramatically increasing. We've been following and refining a single paradigm for decades. In doing so we've picked all the low hanging fruit off the performance tree. Every subsequent gain is more costly, and we're getting to the point where they're not even remotely economically viable. Pushing clocks higher doesn't solve this issue, and once we hit the finite clock limit, which is a very real thing, we're back in the exact same position. We need higher performance per clock, and long term that means we have to abandon Von Neumann architecture.

>unnoticeable real performance improvement, just slightly higher benchmark numbers

What is the fucking point?

Seriously, these fags are years away from giving me a reason to drop my 3570k @ 5GHz

Holy fucking Jews!

This is only due to AMD being shit.
If Zen cannot compete, then the consumers will suffer, no matter which side.

Intel, I'm sure, has the technology ready to give us more than a 50% performance jump, but holds it back and milks the consumers instead with 5-10% max increments each CPU update.

Incredible

Fucking THIS

I've had literally no reason to upgrade at all, nor do i for the foreseeable future

>Not delidding, applying IC Diamond or liquid metal inside, and overclocking the fuck out of it

>Paying for Hyperthreading

3570k > 2500k > 3770k...

>5) move past silicon

Give it 6 years and we'll have something else emerging in the CPU field.
I'm about 100% certain that we're going to see photonic computing happening next.
A year ago they already had a dual-core photonic chip functioning, and they've been pumping tons of money into this for ages.
And unlike Graphene, this photonic stuff is already in production and actually a viable alternative for what we currently have.
Hell, Intel is already producing silicon photonics products.
tomshardware.com/news/intel-silicon-photonics-light-laser,32524.html