IT'S LIKE A

IT'S LIKE A

WHIRLWIND INSIDE OF MY HEAD

Other urls found in this thread:

tomshardware.com/reviews/intel-kaby-lake-core-i7-7700k-overclocking-performance-review,4836.html
anandtech.com/bench/CPU/1027
en.wikipedia.org/wiki/Von_Neumann_architecture
physics.aps.org/articles/v8/91

Who cares about that? What's the overclocking potential on the new chips? Which idiot bases real-world performance of CPUs from different architectures on how they perform at the same clock speed?

DELET THIS

tomshardware.com/reviews/intel-kaby-lake-core-i7-7700k-overclocking-performance-review,4836.html

BARELY ANY MORE PERFORMANCE
HIGHER POWER DRAW
HOTTER

>HIGHER POWER DRAW
what the fuck is intel doing
>200 MHz higher clock rate
>4.5% difference maximum
oh

Less than 200W for the whole system at 4.8GHz is pretty good though, under a torture test.

In tests with the GPU basically sitting idle.
What it shows is that there is virtually no performance difference from the new process, despite what Intel claimed. Clock Skylake and Kaby Lake to the same levels and they draw the same power, and have virtually exactly the same performance.
The only notable difference in Kaby is the video decoder, and that only matters for a laptop, and only if you're going to be watching hours of 4K video. Because there are obviously shitloads of 4K laptops in the mainstream instead of just a few niche systems.
It's pointless all around.

If you have a Sandy Bridge or Haswell system you won't need to upgrade for another 5 years at this rate.

>The only notable difference in Kaby is the video decoder, and that only matters for a laptop, and only if you're going to be watching hours of 4K video. Because there are obviously shitloads of 4K laptops in the mainstream instead of just a few niche systems.
>It's pointless all around.
>If you have a Sandy Bridge or Haswell system you won't need to upgrade for another 5 years at this rate.

INTEL BTFO

They're the same architecture and size; Kaby Lake just has extra steps to accomplish the same thing.

>Intel beats Intel
>Intel is a better option than Intel
>Intel is ultimately a better deal than Intel
Same story for the last 10 years

what a useless cpu gen.

our only hope for progress is AMD now.......fuck.

Clock per clock Zen will probably be around Sandy Bridge in most things, a fair bit better in others. Zen+ will probably bring another 10-15% IPC uplift, coupled with higher clocks from the process IBM designed for their own POWER arch.
After that AMD will be in the same position as intel, and no one will be making significant gains in perf/clock.

We utter stagnation now.

So 7600K is 100% better than AMD Zen? Interesting.

>We utter stagnation now.
Pretty much.

Considering how performance increases are slowing down, any even relatively modern CPU is going to be good until they invent a whole new way of making these, which will without a doubt happen within the next 6-8 years given how hard things are stagnating.

I think everyone should flat out stop paying attention to CPU news until we see a headline like "A new material used in CPU manufacturing" or "Processors now shipping with photonic components" or something.
Only then are we going to see any meaningful performance increases.

>BARELY ANY MORE PERFORMANCE
>HIGHER POWER DRAW
>HOTTER

Prescott 2.0?

okay but seriously, there's no point at all to kaby lake? it's exactly the same thing? so i should just upgrade now? what a disappointment mane

Moore's law is kill, both for CPU performance and for RAM - 4GB systems were widely available 7 years ago and it's still enough to get by if you don't run anything that is specifically memory-hungry.

Where are the crazy and wonderful times of the late 90s/early 00s, when half a year later you had a significantly more powerful CPU?

You still have a significantly more powerful CPU compared to then. 1% of current CPU performance would still annihilate any 90s processor in raw performance.

It's going to take some batshit crazy revolutionary IP to change things. Some radically new architecture, like a non-Von Neumann arch, sort of like what IBM is developing with DARPA. The only hitch is that consumer and enterprise markets are both largely tied to X86 software, so any new arch will need to emulate X86 without sacrificing throughput.

People taking Moore's law wildly out of context is all that is kill.
The idea was that it would be economically viable to double the transistors in an IC every two years. This is still holding more or less in GPUs. It stopped being relevant to CPUs when we stopped making single cores radically more complex because we hit an enormous point of diminishing returns.

RAM isn't even remotely relevant here.
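If anyone wants to see what the literal doubling claim looks like in numbers, here's a rough back-of-the-envelope sketch (Python; the starting transistor count and years are made-up round figures, purely for illustration):

# Moore's observation stated literally: transistor count doubles roughly every 2 years
def projected_transistors(start_count, start_year, target_year, doubling_period=2.0):
    years = target_year - start_year
    return start_count * 2 ** (years / doubling_period)

# made-up baseline: ~1 billion transistors in 2010, projected forward
for year in (2010, 2012, 2014, 2016):
    print(year, "{:.2e}".format(projected_transistors(1e9, 2010, year)))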

CORES

NICE AND HOT

Actually you should have upgraded on black friday because intel shit never goes on sale and the market will be swept clean of skylake processors immediately leaving only the "new" kabylake shit.
That or zen, which will not compete directly with skylake ipc.

;_;

The only purpose is HEVC decoding, isn't it?

...

>4GB systems were widely available 7 years ago and it's still enough to get by if you don't run anything that is specifically memory-hungry.
Like an operating system?

What OS do you run?
Windows 8 uses only 600MB on my laptop...

I'll never understand this... has IPC mattered since Sandy Bridge?
At least as far as I can tell, that's where "good enough" became a thing; nowadays strong single core only matters for legacy things and shit-programmed ones.

multicore matters far more

On that note, gaming is really the only application where a strong single core matters, but realistically Sandy Bridge pushes almost anything past 100fps, an overclocked Sandy Bridge can get almost everything to 120 and most to 144, and at that point it becomes a GPU issue.

Am I really the only one that sees this or what?

can't wait for everyone and their mother to recommend kabylake over skylake

new decoding sounds neat though, but i'm not going to use netflix, especially through edge

that seems like the only reason to get it, desu

if it drops the price of skylake chips though then I'm all for it

When in history have you ever seen intel lower their prices on old stock?
Intel gives retailers a small window to sell stock of last generation parts, and then they have them pulled off the shelves. Give it a few months after Kaby Lake SKUs are on sale and you won't be able to find Skylake anywhere. They have their little preferred retailer list. If you want to get on it and stay on it, you push their newest shit when and how they tell you to.

That's a big bird.

The only good thing about kaby lake is the unlocked K series i3.
Expecting more than 5% performance boost from a new intel gen is just dumb at this point. It hasn't happened the last 5 times and it won't happen now.

You still do not understand. Consumer general processing power is stagnant not because of technical issues, but because we don't 'need' it anymore. All our shit is moving to the cloud, where the informational power dynamic will shift drastically. It's been decided that we've had our fill of general purpose gflops, and no one really even cares. Facebook works. 4K Netflix is handled by hardware decode blocks. No one cares.

We're not witnessing the beginning of the end of public general purpose computing; that started in earnest around the time of sandy vag.......we're now more than half a decade into it, and judging by this thread it's starting to noticeably sting [for the likes of us anyway]. Time will do the rest. We will never again have an era of technology serving primarily people like the previous few decades.

I broke my 'no posting on Sup Forums due to captcha, which furthers Google's machine vision ambitions' rule to post this, so I hope you appreciate it.

>It hasn't happened the last 5 times and it won't happen now.
Happened with Haswell.
anandtech.com/bench/CPU/1027

I think the standard 7700K/7600K overclock will be 5GHz, making it a really significant upgrade over a 2500K. I'm probably gonna sell my 6600K and buy a Kaby Lake chip. Should only cost me like 40 or 50 bucks.

What an uninformed laughable shitheap of a post.

Serial integer performance is, as a matter of fact, stagnating because we can't design a front end that feeds instructions to the execution units radically faster. Making a core wider doesn't add performance when the additional ALUs are doing nothing. Even before Sandy Bridge, a 3-wide integer core was underutilizing that 3rd ALU enough that it could support an additional thread via SMT. Every little bit more performance per clock cycle is costly to achieve. We've been developing the X86 arch for decades, billions upon billions of man-hours are invested here, and diminishing returns is a law that governs all.

It has nothing to do with "the cloud."
You're an uninformed pleb. Your post was nothing but guesswork you pulled from your ass.
If you understood anything at all you'd realize the dire situation we're in. We're not increasing perf/clock, and perf/watt advances are coming almost entirely from process instead of architectural efficiency. If perf/watt and ultimate performance don't continue to make enormous strides, we are on track for exascale computing to consume 99% of all power produced in the developed world. Everybody in the semiconductor industry is working their asses off to tackle these issues; no one is letting stagnation take place because they want to. These are real engineering challenges that take hundreds of people years of tireless work to overcome.

Basically go fucking kill yourself, you tech illiterate shitter.

NO NO NO

DELET

You seem knowledgeable, so here is a question from a noob.

Why don't we design hardware blocks for math operations?
For example, integer addition takes what, 1 cycle? Float multiplication 5 cycles? Square root 25 cycles?

Why aren't square root, Euler's number, pi, sin, cos and so forth implemented as "hardware blocks" so that they only take 1 cycle?

I coded a simple molecular dynamics program, and one thing all the established programs did was only calculate with squared numbers and never take the square root. Or they didn't use the exponential function, and instead used polynomials to approximate it.
And all because it's computationally cheaper.
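Roughly what that square-root trick looks like, as a minimal Python sketch (the cutoff value and the names here are made up, just to show the idea):

import math

CUTOFF = 2.5                 # interaction cutoff, arbitrary units
CUTOFF_SQ = CUTOFF * CUTOFF  # precomputed once

def within_cutoff_naive(a, b):
    # takes a square root just to compare a distance against the cutoff
    dx, dy, dz = a[0] - b[0], a[1] - b[1], a[2] - b[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) < CUTOFF

def within_cutoff_cheap(a, b):
    # same answer, but compares squared quantities and never calls sqrt
    dx, dy, dz = a[0] - b[0], a[1] - b[1], a[2] - b[2]
    return dx * dx + dy * dy + dz * dz < CUTOFF_SQ

p, q = (0.0, 0.0, 0.0), (1.0, 2.0, 1.0)
assert within_cutoff_naive(p, q) == within_cutoff_cheap(p, q)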

I'm still going to buy it. Should be a nice upgrade over my 4690k

The history of IC design explains a lot if you look at the progression generation to generation. CPU cores used to be very small, very general things. They've become larger and more complex, with support for more varied instructions, through an additive evolutionary process. If we don't have a specific hardware function for something, we'll spend a few more cycles processing it. It usually stays that way until demand is high enough for those specific functions, and then we get ASICs that handle them. A prime example of this in recent years is video encoders and decoders being baked on die.

Basically, if a certain function or set of functions isn't implemented in hardware, it's because there either isn't enough demand yet, or the performance speed-up isn't worth the added die area and complexity it would incur.
We could have the entire operating system in hardware, and it would be faster than anything else around; it's just that no one considers it worth the effort.

>Why aren't square root, Euler's number, pi, sin, cos and so forth implemented as "hardware blocks" so that they only take 1 cycle?
They already are implemented in hardware (to varying degrees), but that sure as fuck doesn't mean they take one cycle. The time an operation takes depends on the number of gates the signal has to pass through, the speed the gates can switch at, and the length of wire between them (due to the speed-of-light limitation). You can't build arbitrarily complex operations and have them complete in a short time. Even operations like multiplication that are obviously implemented in hardware take many cycles and have to be pipelined to make it not suck ass. Also consider that die space is limited and you can't just add hardware for every stupid thing some asshole wants to do 2% of the time.

The subtext [you could barely call it that, more like an obvious conclusion] was that we'd already have whatever improvements you describe if the market demanded it, and I explained how and why it apparently doesn't.

Autists don't grok subtext and discussion flow though, I know, preferring instead performative expressions of subject minutiae. I forget where I am sometimes.

>You can't build arbitrarily complex operations and have them complete in a short time
Can't you just put a lookup table into the hardware? After all there are only finitely many floating point numbers.

Making another uninformed shitpost in defense of your uninformed shitpost won't change anything.
As a matter of fact, serial integer performance has slowed because of technical issues. Saying anything else is violently ignorant. You're talking out of your ass about a topic you know utterly nothing about.

Diminishing returns is a law. Repeat this statement aloud until it is forever engraved in your inferior and malformed autismal mind.

t. Intel employee

>moores law is kill for RAM
What are you even talking about? We now have 128GB DIMMs and that's still at 20nm.

lol no it wont

You can do that in software or do you mean LUTs like in FPGAs?
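In software it's basically a precomputed table plus interpolation. A toy sketch (Python; the table size and the linear interpolation are arbitrary choices here; a full table over every float would need 2^32 or 2^64 entries, so you trade accuracy for a small table):

import math

TABLE_SIZE = 1024
STEP = (2 * math.pi) / TABLE_SIZE
SIN_TABLE = [math.sin(i * STEP) for i in range(TABLE_SIZE + 1)]

def sin_lut(x):
    # approximate sin(x) by linear interpolation into the precomputed table
    x = x % (2 * math.pi)   # reduce the argument to [0, 2*pi)
    pos = x / STEP
    idx = int(pos)
    frac = pos - idx
    return SIN_TABLE[idx] * (1 - frac) + SIN_TABLE[idx + 1] * frac

# sanity check against the real thing
for v in (0.1, 1.0, 2.5, 4.0, 6.0):
    assert abs(sin_lut(v) - math.sin(v)) < 1e-4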

Supposedly this thing beats AMD by 70%?

what the fuck

you have no idea what you are talking about

Intel just went full AMD
Never go full amd, nigga
Will it have a 220W TDP too?

Not that retarded user, but I think he has stumbled on a point. While I don't doubt there are real technical issues at work behind the scenes, I also don't think the average consumer "needs" any more CPU power than what they currently already have at their fingertips with modern PCs/smartphones.

I know you could argue that's kind of a myopic view (imagine someone saying the same thing 25 years ago), but we seriously have kind of plateaued from a technical standpoint right at a point where the average user already seems to have more than enough tools at their disposal to do whatever they need to do in their everyday lives with whatever computer they personally have available. My point is that as demand for more performance from average people decreases, so will R&D money aimed at solving the current technical limitations of silicon.

>so will R&D money aimed at solving the current technical limitations of silicon

There's no solving it; even Intel has admitted that after 7nm they have to switch to something completely different.

Hello FX9590

>There's no solving it

Yeah, I get that, re: silicon. I just meant solving the problem in general, namely: how do they come up with faster/better stuff to sell us when we already own everything we would ever need? Whether it's silicon or something else altogether, the point is that if there isn't demand for that power in the first place, then there likely won't ever be a solution, because whatever that solution is will cost a fortune to develop, if it can be developed at all.

So this is the power of incremental improvement...

Take away professional workloads and modern high end gaming, and we've been at "good enough" computing since the Athlon 64 and C2D era. R&D into high performance computing hasn't fallen off; it increases every year. The issue is a purely architectural one and has nothing to do with substrate material.
Engineers struggle for every additional 1% IPC uplift. Every year it costs more money, takes more man-hours, and there are fewer ideas going around to pursue performance in line with the current paradigm.

Thanks for the explanation, that makes some more sense to a pleb like me. So basically what you're saying is that we're up against the limits of x86 architecture? I was always led to believe that played a role too but that was obviously just my misunderstanding.

enlighten me then, because i can see the benchmarks: there is almost no game that can't be pushed to max refresh rate on a sandybridge i5/i7, and if it can't, a mild overclock does it

tell me what other thing besides a shittily programmed game doesn't multithread effectively enough that it has to default to single core performance alone.

>intels latest cpu is confirmed a complete blunder
>intel fans on suicide watch
>zen around the corner
>45W tdp zen beats 6700k
>intel fans start killing themselves
>intel realises indians who poo in the street beat them
>instead of being further humiliated intel drops out of the cpu market
>Amd fans rejoice around the world as i7 level performance has dropped to a reasonable $150
>remaining intel fans either convert or die with intels legacy
>and so the war ended and the pc userbase was united as one

It's not limited to X86.
Everything from the branch unit to the scheduler is what does the real important work. The execution units only crunch the numbers they're handed. The big holdup is how fast we can get instructions to the execution units. Designing a scheduler capable of issuing a substantially higher number of instructions is itself rapidly approaching damn near impossible, and we're still ultimately limited by the underlying principles behind every commercially successful arch around.

en.wikipedia.org/wiki/Von_Neumann_architecture

We will eventually need to transition to non-Von Neumann architectures. X86, ARM, MIPS, POWER: given sufficient time they will all converge on the same point in performance.

Our only hope is, was, and would be graphene.
All the problems surrounding graphene are already resolved, the only thing remaining being the fabrication process.
Except because AMD is in the gutter, Intel has no reason to invest that massive money into fabrication and introduction, and is better off milking retards and idiots with single-digit performance increases below the 6th percentile, and core jerking, since idiots buy that shit anyway and willingly overpay for trivial performance increases.

If graphene came though, we'd have 100GHz clock rates per core, and fewer cores always equal more stability and less code faggotry for adapting to core counts.

Stop regurgitating reddit science

When you stop shitposting and start writing something with intelligence (which you don't have, otherwise your post would have some substance to it), that's when I'll stop regurgitating facts.

J E W E D

Nothing you posted was factual; you regurgitated some ignorant reddit-tier nonsense.
A functional bandgap with graphene is not a solved issue. Growing, and it is a process of growing, switching logic off of a graphene substrate is not a solved issue. Doping graphene is an order of magnitude more difficult than silicon, and resolution has to be 1:1 atomic scale or an entire gate, and likely everything around it, is broken.

You're blindly regurgitating things you know nothing about. Even pretending like you have an IQ over 70 in itself is a joke.

dude, we see intel cpus, we see their server chips, and we have workstation boards to use them in. the stagnation is real, but the issue we face now is a power one, at least in higher end computing, where the cost of a server rack is a write-off but the power consumption cost is real. everything is moving toward using threads more efficiently because, let's be honest here, die space is at a premium. we are getting lower power upgrades, at least in consumer computers, for two reasons.

1) lack of competition. This lets intel serve pro uses/needs with power efficiency while offloading some of that work onto consumers.
2) the pro workloads are HEAVILY threaded, and ram speed is an issue when the cpu is more than fast enough. so instead of single core throughput they focus on efficiency.

some aspects went to the cloud, like storage, but for the most part general purpose computing hasn't; it has only moved on a renderfarm-renting basis.

part of what dumb dumb said is right, the reason the focus has been on power and not speed is because for the average person, everything is fucking good enough, and for the people who use laptops exclusively, they get a fairly massive boost in performance from more efficient cpus.

fact is, people who want speed are the overwhelming minority, as everyone else wants efficiency.

7nm is now 10nm unless something forces them to go to 7nm; it may just be more efficient to go to another process than to try to get the quantum tunneling issue sorted out.

>Summit Ridge is proven to only be twice the performance of the FX-8100, not the FX-8370
>that means SR7 is going to be weaker than the i7-6850K at best
>Kaby Lake is literally Skylake Part 2: The Jewlectric Bugapooinloo
>somehow it's worse than the jump from Haswell to Devil's Canyon
>Cannon Lake is getting delayed AGAIN
Just kill me

im all for a good roast but you literally ripped him a new asshole

gonna take a while for that redditor to recover

>bandgap
Your knowledge is over a year old.
physics.aps.org/articles/v8/91
The bandgap already has a solution; all it needs is a financial dump, for instance from some CPU production company with loads of money.

You want to talk IQ? Talk when your IQ increases with the times and isn't stuck in the stone age.

Way to prove you literally are mindlessly regurgitating things you read on reddit. The presence of a bandgap does not mean it's a suitable bandgap to be exploited in a device, nor does it mean the material is even remotely suitable or stable enough to serve in a semiconductor. Hilariously, you chose to demonstrate a layer of graphene bound to an Si-based substrate. If you can't see the irony in preaching the end of silicon while your meme material is only an additional layer on top of it, there are no words to help you. It's akin to me saying an SOI wafer isn't silicon because there's an oxide layer on top of it.
If you could do anything except copypaste what you read from other tech illiterate retards you'd mention any of the other meme 2D materials, or anything else really. Silicene, molybdenum disulfide, tungsten disulfide, things actually being pursued in the industry.

But hey, feel free to keep parroting your upvoat crew.

not the op, but most real-time signal processing, like audio DSP, cannot be multithreaded properly by nature. if you care only about your gaymen then i guess your sandy bridge will be fine. dont be retarded
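here's roughly what i mean, as a minimal sketch (Python; the coefficient and input are arbitrary): a one-pole recursive filter where every output sample needs the previous one, so you can't just split the sample loop across cores.

def one_pole_lowpass(samples, a=0.1):
    # y[n] = a * x[n] + (1 - a) * y[n-1]  -- a loop-carried dependency
    out = []
    y = 0.0
    for x in samples:
        y = a * x + (1 - a) * y   # y[n] cannot be computed before y[n-1] exists
        out.append(y)
    return out

print(one_pole_lowpass([0.0, 1.0, 1.0, 1.0, 0.0]))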

I have a 6700K. Seems like I will be prolonging my next CPU upgrade indefinitely at this rate.

BTFO by the devil himself. Satan knows his microchip technology.

>That 100 degree 7700k overclock.

I'VE GOT ANOTHER CONFESSION TO MAKE

that's why it gets its own thread in a game, dingus

COMPETITION

NON-EXISTENT AND STAGNATED

still faster than anything from ayymd kek

High performance computing will still be in demand by governments and AI researchers (Hi Google)

Yeah I suppose you are right about that.

...

>Clock per clock Zen will probably be around Sandy Bridge in most things, a fair bit better in others
Clock per clock Zen is already better than Broadwell-E.

What will probably happen is that the clocks it reaches won't be that high, meaning it won't beat something like Kaby Lake in single core perf.

>Clock per clock Zen is already better than Broadwell-E.
Honestly that is pretty impressive given the fact that AMD's R&D is basically meme tier.