Is this true? This is what they came up with after a year of research?

Sounds like a success to me. 1% might get you the sweet spot you've been wishing for when rendering your code. Compiling your textures will be faster as well.

I don't know how Intel continues to revolutionise the market each year.

bullshit

If it's cheaper than the Intel chip then it's a win

CPUs have literally not been improving for the last 5 years

At the same frequency, sure. The thing is, the 7600K can reach a higher frequency at the same TDP as the 6600K.

Good, then my Haswell has at least 3 more years of life left.

Is there any reason to even upgrade yet?

t. 2500k user

no

If only that graph displayed power consumption...

>each GPU generation brings a 30-50% performance improvement
>each CPU generation brings a 0-5% improvement
WTF happened? I know people like to shit-talk Nvidia, but goddamn, these guys are bringing the heat to AMD, while Intel is literally doing nothing.

Do CPUs in general need a major revision and a whole architecture remake?

The world would stop working.
Everything is made for x86 except phones.

>3 more years
More like 10 more years, unless tier prices come down.

Is this bait?

>Do CPUs in general need a major revision and a whole architecture remake?
Let's see...
CPU - requires complex architecture changes to get even a little bit of extra performance
GPU - lol just put in moar coars x---DDDdd

That's what we got for AMD being so incompetent and holding the market back

Meanwhile, Apple's Hurricane is already 25% faster than Skylake/Kaby Lake clock for clock.

Bullshit. Give us a source.

semiaccurate.com/forums/showthread.php?p=278368#post278368
Both clocked at 2.3 GHz

How long till Apple replaces their Intel Macbooks with ARM ones?

So basically the way CPUs work is flawed and needs a major revision.
I know both AMD and Nvidia GPUs work on a similar core principle, and like you mention, more cores seem to scale really well, yet for CPUs anything over 4 cores seems to do nothing for most games.

Not to mention that video cards also get a huge boost from faster video RAM (GDDR3 vs GDDR5 vs GDDR5X vs HBM, there seems to be a pretty big difference in performance for each RAM type), unlike CPUs (DDR3 vs DDR4 doesn't seem to have much impact at all).

They already announced they won't be working on speed increases, it's all about power efficiency and integrated graphics improvements now.

Is this from Geekbench? They're known for reporting unrealistic scores.

I wouldn't be surprised if it's true; desktop CPUs have gained like 20% IPC in the last 5 years, compared to huge gains by mobile ARM CPUs.

You didn't get the memo, did you? Sandy vagina is all you ever need, unless it's an AMD chip, in which case anything short of Kaby Lake is too slow.

>So basically the way CPUs work is flawed and needs a major revision.
Not really.
We could get a boost by abandoning x86, but that might result in a one-time 20% gain.
Maybe another 20-30% boost if we can get VISC to work. But beyond that we'll be back to 5% per generation at most.
The real problem isn't with CPUs though, it's with programmers.
When people can't even use multiple threads, obviously they can't exploit the full potential of CPUs.
I mean, just take a look at Stack Overflow, where people ask for performance advice and the most upvoted replies are usually "don't pay attention to performance, the compiler will optimize it, code readability is more important".
I have no idea where they're getting these ideas, but it gives a pretty clear picture of why a lot of programs perform like shit.

t:poorfag

So why exactly do GPUs scale so well with more cores? Do GPUs have onboard controllers which direct work to unused cores?

It's price/performance. Sandy to Kaby/Skylake isn't worth it if it's too expensive.

Zen might bring prices down dramatically while keeping performance up.

The tl;dr is that the sort of shit a GPU does can be spread across as many cores as you have - there really is no inherent downside (again, in simple terms) to adding more cores to a GPU - it will just get faster.

CPUs do a lot of work where B cannot start computing until A is done, thus no matter how wide you go you are still limited by the speed at which A can be computed.

See also: why any BOINC project that supports GPUs is orders of magnitude faster than when using the CPU.

GPUs do tasks which are way better suited for parallel computation, very unlike CPUs.
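
A minimal sketch (mine, with made-up sizes) of the two kinds of work being described: the first loop is independent per element, so it could be split across any number of cores with no coordination, while the second carries a dependency from one iteration to the next, so extra cores can't help it.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int N = 1 << 20;
    std::vector<double> a(N, 1.0), b(N);

    // GPU-style work: every element is independent, so it could be split
    // across any number of cores with no coordination at all.
    for (int i = 0; i < N; ++i)
        b[i] = a[i] * 2.0 + 1.0;

    // CPU-style work: each iteration needs the previous result ("B cannot
    // start until A is done"), so extra cores do not help this loop.
    double acc = 0.0;
    for (int i = 0; i < N; ++i)
        acc = acc * 0.5 + a[i];

    std::printf("%f %f\n", b[N - 1], acc);
}
```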

Newer chipsets with support for fancy new toys (M.2 drives, PCIe 3.0), plus new instructions that make your CPU faster in *SOME* cases.

But honestly, if you're just a casual user (watching videos, browsing Sup Forums, playing CSGO, overbang, or some generic popular game), there's no reason to upgrade.

I don't need 16 threads to shitpost, but dammit, I'm going to have them. That, and for vidya (again, lol, 16 threads for vidya).

GPUs work differently.
They're fed work per vertex/per pixel/per triangle etc.
Usually you have millions of these, so it's essentially automatic scaling.
For example, in a vertex shader: number of vertices = number of threads, so scaling is automatic.
On the other hand, CPU workloads are usually a lot more sequential, so you need to properly adapt algorithms to more threads, but for games it's usually just programmers being lazy.
For example, you have physics or NPC AI being processed, which just runs in big for loops, and a lot of the time programmers are just too lazy/stupid to split them across different threads.
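
A rough sketch of what splitting that kind of big for loop across threads could look like; Npc and update_npc() are hypothetical stand-ins, not taken from any real engine.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Npc { float x = 0.f, y = 0.f; };

void update_npc(Npc& n, float dt) {  // stand-in for per-NPC physics/AI work
    n.x += dt;
    n.y += 0.5f * dt;
}

int main() {
    std::vector<Npc> npcs(100000);
    const float dt = 1.0f / 60.0f;

    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (npcs.size() + n_threads - 1) / n_threads;

    // Give each thread its own contiguous slice of the NPC array; the slices
    // are independent, so no locking is needed inside the loop.
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(npcs.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&npcs, begin, end, dt] {
            for (std::size_t i = begin; i < end; ++i)
                update_npc(npcs[i], dt);
        });
    }
    for (auto& w : workers) w.join();
}
```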

Honestly, the only game I've seen benefit from more than 4 cores is The Witcher 3 (overclocking my i5 gave it a huge boost in Novigrad, and my 5930K is a fucking boss).

But the M.2 drives and PCIe lanes are very nice; this will likely be my CPU for the next 5-6 years.

M.2 does interest me - I need to do more research, but I'm thinking an M.2 SSD so I can can a million SATA HDDs (I have about 8 HDDs) would be nice.

>so I can can a million SATA HDDs

Would you like to rephrase that?

Power consumption is the same as well

Kaby Lake is just Skylake with more features. The core is exactly the same.

>The real problem isn't with CPUs though, it's with programmers.
And sequential data dependencies

>Honestly, the only game I've seen benefit from more than 4 cores is The Witcher 3 (overclocking my i5 gave it a huge boost in Novigrad, and my 5930K is a fucking boss)
Would TW3 benefit from 32 cores?

The real reason games don't scale well to multiple cores is that DirectX 11 and OpenGL are shit.

DX12 / Vulkan will change that, by allowing you to offload command buffer generation to as many cores as you want.

Dude, it's a Skylake refresh with better hardware decoders and hopefully thermals that aren't cheaped out. Disgusting, I hate the stagnation we're going through. Breakthroughs fucking where? When are we leaving silicon?

So I can use a million HDDs. I've salvaged a load from old work PCs for storage for my delicious piracy.

DX12 doesn't scale beyond 6 cores so far.

The next decade will be about refining silicon; after that, a stacked process... 2050.

No, it's because the cores' ability to communicate with each other and with the cache system degrades with each additional core.

A couple of workarounds have been proposed to avoid this pitfall, and Intel recently (this year, I think) bought one of the companies that's working on this new core design.

>bought one of the companies that's working on this new core design

Just to gut them, I bet.

Depends on your design. DX12 imposes no limit. If you can compute all your command buffers in advance, you can in principle do it on however many cores you want.

The problem is that it's a complete paradigm shift from how you used to develop games, so developers have to re-learn everything they used to know about writing engines. Command buffer pregeneration and aggressive reuse is a fundamentally new thing.

Driver overhead basically goes down to 0 if done properly, so your game will be purely limited by the GPU, not the CPU.
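
A conceptual sketch of that pattern, assuming nothing about the real D3D12/Vulkan APIs: each worker thread records into its own command list with no sharing, and ordering only matters again at the single submission point. Cmd and the count at the end are placeholders for real command buffers and queue submission (ExecuteCommandLists / vkQueueSubmit); this is not actual graphics-API code.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

struct Cmd { int draw_id; };  // placeholder for one recorded GPU command

int main() {
    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    const int draws_per_thread = 1000;  // made-up workload size

    // One command list per worker: recording is lock-free because nothing is shared.
    std::vector<std::vector<Cmd>> lists(n_threads);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < n_threads; ++t) {
        workers.emplace_back([&lists, t, draws_per_thread] {
            lists[t].reserve(draws_per_thread);
            for (int i = 0; i < draws_per_thread; ++i)
                lists[t].push_back(Cmd{static_cast<int>(t) * draws_per_thread + i});
        });
    }
    for (auto& w : workers) w.join();

    // "Submission" is the one place order matters again: hand the lists to the
    // queue in a fixed sequence (here we just count what was recorded).
    std::size_t total = 0;
    for (const auto& l : lists) total += l.size();
    std::printf("recorded %zu commands across %u threads\n", total, n_threads);
}
```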

No, TW3 will likely run fine on a hyper-threaded quad core; I think anything more than 6 cores will be pointless.

Using an M.2 drive will disable a few SATA ports on your mobo, FYI, but if you have a boatload of drives, get a few RAID cards; it's the only correct way to use that many drives.

Wow, what an unoptimized piece of shit that game is.

Looks more like the AA was turned up all the way and that tanked the FPS. Seriously, 2x MSAA can cause up to a 30% FPS reduction, and 8x MSAA can cut the FPS to a quarter.

>you need a GTX 1080 SLI to play at 1080p
wat

>When are we leaving silicon?
The moment these guys start losing some serious money.
You can bet your ass that if they suddenly experienced a 30% decline in profits for 2 years in a row, they'd ditch silicon in an instant.
I'd say give it 6 years max and they'll be seriously looking into other things, like silicon photonics etc.

>I think anything more than 6 cores will be pointless.
Shame, would have been pretty funny to put it to the test.

It's surprisingly hard to get anything to scale to 32 cores.

>if you have a boatload of drives, get a few RAID cards; it's the only correct way to use that many drives
that's a funny way to spell HBA

I work at Intel. They couldn't get 10nm working, so they went "oh shit" and had to re-release Skylake. It's literally the same CPU as Skylake; it's just clocked higher because they can get better and more consistent chips out of the fab at 14nm. They also added in some media decode features.

This. Surprised more people don't get it.

Of course the benchmark scores are going to be identical if you're testing literally the same chip.

Man, I remember the fucking hype train for multi-core CPUs literally being able to split workloads across multiple cores exactly the way you mentioned. Fucking shame Intel has kept the bottom line at dual cores, so developers don't even feel the need or see the point in properly threading their shit. On the other side of things, you get shit like Forza Horizon, which will slam 8 threads down a 4c/8t CPU's throat until it chokes and bottlenecks a GTX 1070. If the future of gaming goes from lazy universities threaded games to lazy multi-threaded games, AMD stands to make a hefty profit depending on where they price their 6- and 8-core CPUs.

Lazy unthreaded games*

>Kraby Lake

not even once

At least an old Intel won't become obsolete for quite a while, right?

Depends. If all I did was gayme, I think I'd be fine with the 3570K. I recently picked up encoding, so more cores will help me greatly. I also build software every single day to be on the bleeding edge. With everything in mind, Zen is very tempting. I'll be waiting until summer 2017 to make a decision. By then, most of the drawbacks of Summit Ridge will be out in the open. I'm really hoping for at most $600 for Zen; I really want it.

Multithreading is hard. 8 cores is about the max that any desktop/workstation would need/want.

That would work too, but generally you want data redundancy.

One would think that a technology board would know that very few problems are embarrassingly parallel: en.wikipedia.org/wiki/Embarrassingly_parallel
But keep spouting that 'omg lazy devs, why don't they just use 8 cores' meme, idiots.
I bet
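
Amdahl's law makes that point concrete: if only a fraction p of the work can run in parallel, the best possible speedup on n cores is 1 / ((1 - p) + p / n). The p values in this sketch are made up, but even a program that is 99% parallel gets well under 8x on 8 cores.

```cpp
#include <cstdio>
#include <initializer_list>

// Amdahl's law: ideal speedup for parallel fraction p on n cores.
double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const int cores = 8;
    for (double p : {0.50, 0.90, 0.99})
        std::printf("parallel fraction %.2f -> %.2fx speedup on %d cores\n",
                    p, amdahl(p, cores), cores);
}
```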

>That would work too, but generally you want data redundancy.
Which is why you use ZFS.

Lel Intel finished

AMD 4 life

You have a render that takes 100 hours; you get this CPU and save 1 hour. Do you really fucking care?

ZFS is pretty much RAID, but he will still need a RAID card, even if he just uses it for the extra SATA ports.

>but he will still need a RAID card, even if he just uses it for the extra SATA ports
that's a funny way to spell HBA

> Stuff taking more than 10 minutes
> Not using a render farm/distcc/buildserver

People aren't really this stupid, are they? Why on earth would anyone do this sort of shit on a workstation?

Sounds like they're going the AMD route? Didn't AMD say something like this, and then we were stuck with the FX series for like 2-3 years?

Intel have been doing this for almost six years now.

This is why you get only a 30% speed increase over six generations (i5-2500K --> i5-7600K).

On the other hand, the majority of laptops have moved from 35W to 15W CPUs while still increasing performance (i5-2410M --> i5-7200U).

Buy a hexacore Zen

You mean octacore

Autism

Well, at least Intel is releasing something consistently, while AMD has released nothing for a while.

> What is sarcasm?

>what are APUs

Review sites don't bother with them because they don't top performance charts, but AMD has made some massive advancements with their low-power chips. It's why everyone is getting excited over the Zen-based APUs: AMD's iGPU tech effectively butchers Intel's, and once the CPU side catches up, Intel simply cannot compete on that front.

Hey guys, my phone won't let me delete some photos off it. I got them off Google Drive when I reset my phone, as an auto-backup dump. I already tried deleting them off Google, but it had no effect on my phone. Also, the photos have no files? What the fuck??

This. APUs are cute cute cute.

>In the near future freaking Apple could dominate the CPU market for both Smartphones/Tablets and Laptops.
Stallman save us!

It's not even a money issue. It's putting all that time into a new build and then not even getting a 10% single-threaded speed improvement over the old machine. (You do build your own machine, right?) But mostly, it's about the fact that slow-ass CPU-bound tasks are going to remain slow-ass for the next goddamn decade, with no relief in sight.

And what makes it even worse is all this software that's still single-threaded after all these years. It's still exceedingly rare for apps to have a nice -threads option like ffmpeg has. Thanks for the multiple cores, Intel. Now, how about some software that actually uses them?
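
A hypothetical sketch of that kind of option, assuming a made-up "-threads" flag (this is not ffmpeg's actual code): default to every hardware thread and let the user override it.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <thread>

int main(int argc, char** argv) {
    // Default to every hardware thread; the call is allowed to report 0.
    unsigned threads = std::thread::hardware_concurrency();
    if (threads == 0) threads = 1;

    // Let the user override with "-threads N".
    for (int i = 1; i + 1 < argc; ++i)
        if (std::strcmp(argv[i], "-threads") == 0)
            threads = static_cast<unsigned>(std::strtoul(argv[i + 1], nullptr, 10));

    std::printf("running with %u worker threads\n", threads);
}
```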

>Using a CPU for 10 years

Why do people like you who hardly use their PCs for fucking anything bother coming to a tech board?

Ubisoft games, my man.

Intel btfo

>i was only pretending

>Why don't hobbyists invest millions?