The future of processors in home computers

I was reading about GlobalFoundries' new 7nm process, with its promise of 5 GHz clock speeds being possible on the average chip and a 60% reduction in power consumption. This got me wondering how much further the processors used in home computers will advance, and how much longer it will be until they reach the point where most people can easily afford more power than they'll ever be able to use. How many people would really find a use for something like a future die-shrunk, cheaper AMD Threadripper that they can overclock past 5 GHz given adequate cooling? Would consumer applications even be able to make use of something like that, even with more bloat?

Info on GlobalFoundries' new 7nm process that I was reading:
globalfoundries.com/sites/default/files/product-briefs/7lp-product-brief.pdf

>the point where most people can easily afford more power than they'll ever be able to use.

We're already at this point, though there are some caveats. The average person only uses a computer for YouTube, Facebook, email, Skype, browser games, and other undemanding things like that. Even a Sandy Bridge i5 is a ridiculous amount of performance for such paltry workloads.
The caveat is that new workloads have a way of emerging, and even if they're slow to be adopted, they become commonplace eventually.

In a couple of years we'll be packing the performance of a Ryzen 7 1700 into a tablet, which seems pretty outlandish from a performance perspective, but in a couple of years some new "killer app" might also come around that everyone wants and that takes a ton of power to run.

>The caveat is that new workloads have a way of emerging
How many of those have come out over the last decade? Video games are the only common use that I can see possibly taking advantage of that amount of power, but even those will likely hit a point where they can no longer make use of a more powerful CPU.

4K video playback is smoothest if you have an ASIC dedicated to decoding it. People love the whole "VR experience" thing, not so much for gaming, but the mobile VR stuff like Samsung's headset or Google Cardboard. All those ARM SoCs have dedicated hardware for processing that in their display DSPs.

These aren't incredibly intensive, but they are signs of things to come.

>4K video
>VR
The additional workloads for those depend on the GPU.

Dell released an 8K display, and even today, with the most powerful CPU and GPU that exist, you can't max shit out in games at 4K and get 60fps.
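
For scale, a quick pixel-count comparison (assuming the usual 1920x1080, 3840x2160 and 7680x4320 figures for 1080p, 4K and 8K):

```python
# Pixel counts for common resolutions, to show how big the jump to 8K is.
resolutions = {
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}
base = 1920 * 1080
for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name}: {px / 1e6:.1f} MP ({px / base:.0f}x 1080p)")
# -> 8K is 4x the pixels of 4K and 16x 1080p, so a GPU that can't hold
#    60fps at 4K is nowhere near driving an 8K panel at playable rates.
```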

I honestly hope VR or some other performance-heavy shit gets adopted by normalfags so that we finally get more power for cheap.

Cool, but most people don't game, especially not at that res

That's not the point, you stupid gizmo. OP said we're close to the point of having too much power, when we clearly are not.

I don't see anything else on the horizon that really demands appreciable performance. It's not like the average person has their own cluster for simulating weather patterns. With digital assistants like Siri, everyone is offloading absurd amounts of processing to a server somewhere else; they're not doing any processing locally.
Though that sort of thing *should* be what we're using our spare processing power for: a digital assistant with the language skills of IBM's Watson that can answer any reasonable question without datamining you.

By "we" and "home users" he meant Facebook normalfags,
the people who buy 1366x768 Atom laptops and are happy with them.

>8K
You need a larger monitor and need to sit rather close to even see an advantage over lower resolutions. The average person with good eyesight stops being able to discern 8K from slightly lower resolutions once they sit further than about 12" from a 32" 8K monitor. If 8K gaming happens, it'll be because companies have actually run out of other relevant shit to push.
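
A quick sanity check on that 12" figure; a minimal sketch assuming a 16:9 32" panel at 7680x4320 and roughly one arcminute of visual acuity (both assumptions, not measured values):

```python
import math

# Distance at which a single pixel on a 32" 8K panel subtends ~1 arcminute,
# i.e. where someone with 20/20 vision stops resolving individual pixels.
diag_in = 32.0
h_px = 7680
width_in = diag_in * 16 / math.hypot(16, 9)   # ~27.9" wide for a 16:9 panel
ppi = h_px / width_in                         # ~275 pixels per inch
pixel_pitch_in = 1 / ppi                      # ~0.0036" per pixel
acuity_rad = math.radians(1 / 60)             # 1 arcminute of visual acuity
distance_in = pixel_pitch_in / math.tan(acuity_rad)
print(f"{ppi:.0f} PPI, pixels blend together beyond ~{distance_in:.1f} inches")
# -> roughly 12.5 inches, which is where the 12" claim above comes from.
```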

Moore's law is dead.
Developing EUV was an engineering nightmare, with not much in the way of future prospects except for memes like directed self-assembly.

Don't expect many gains in the near future.

No, humans can tell the difference between 4K and 8K. We can also see up to 240 Hz. We have to be able to max shit out at 8K and get 240fps before the human body becomes the bottleneck.
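
Rough numbers on what that target actually demands, assuming plain 24-bit 7680x4320 frames (an assumption for illustration, nothing more):

```python
# Back-of-the-envelope pixel and bandwidth figures for 8K at 240fps.
h_px, v_px, fps = 7680, 4320, 240
bytes_per_px = 3                              # 8-bit RGB, no alpha, no HDR
pixels_per_sec = h_px * v_px * fps            # ~8 billion pixels per second
raw_gbit_per_sec = pixels_per_sec * bytes_per_px * 8 / 1e9
print(f"{pixels_per_sec / 1e9:.2f} Gpx/s, ~{raw_gbit_per_sec:.0f} Gbit/s uncompressed")
# -> about 8 Gpx/s and ~190 Gbit/s of raw framebuffer traffic before any
#    rendering work, far more than a DisplayPort 1.4 link can carry.
```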

>7nm FinFET with immersion litho in 2018
>7nm FinFET with EUV in 2019
>5nm GAA nanowires in 2020~
>implying there won't be further refined GAAs and other transistor topologies once EUV is totally ubiquitous and mainstream

Engineers keep on engineering, man. If there's a dollar to be made and a problem to be solved, there'll be engineers drinking two pots of coffee a day until it's done.

You're insane in the brain if you think we can go lower than 5nm using silicone.

Anyone who'd try to make an IC out of silicone is a real tit.

I think I'll have to agree with that notion. It will be interesting to see what the next big thing will be, though; maybe AR on a much bigger scale than ever seen before? Or maybe the consumer market will stagnate, and instead of performance the focus will shift solely to efficiency and production costs, making technology truly ubiquitous: IoT on a grand scale. I hope it won't end up with the sole purpose of enabling sloppier software development practices than ever before.

You need 128 GB of RAM if you want smooth scrolling in Chrome.

Perhaps with the advent of more advanced man-machine interfaces (think neural links) we could see a huge surge in the processing power required. 8K at 240fps is small-time.

7 nm is just smaller fins, try 5 nm for an actually different process.
youtube.com/watch?v=ny5qenUddY4

>muh Moore's law
Moore's law being dead doesn't mean advances cease; it just means they take a bit longer than the cadence Moore observed decades ago. Also, even when we do inevitably hit a wall on how small chip features can get, there's still the possibility of increasing yields and bringing down the cost of higher-end chips.

Nobody can even make EUV machines except ASML.

And even they have no idea how to improve on it in the future except with the aforementioned DSA (which has never even been proven to be possible).

Face it: computer chip development is hitting a brick wall.
Just be glad we had such a good run over the last 30 years.

The problem with that is it will cost MORE per transistor instead of less.

Die shrinks used to be about cutting costs, with better performance as a nice bonus.
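
A minimal sketch of that cost-per-transistor argument; every number below is a made-up placeholder purely to illustrate the trend, not a real foundry figure:

```python
# Toy model: cost per transistor = wafer cost / usable transistors per wafer.
def cost_per_transistor(wafer_cost, dies_per_wafer, yield_rate, transistors_per_die):
    good_dies = dies_per_wafer * yield_rate
    return wafer_cost / (good_dies * transistors_per_die)

# Hypothetical mature node: cheap wafers, high yield.
old = cost_per_transistor(wafer_cost=3000, dies_per_wafer=600,
                          yield_rate=0.90, transistors_per_die=2e9)
# Hypothetical leading-edge node: denser dies, but pricier wafers and worse yield.
new = cost_per_transistor(wafer_cost=9000, dies_per_wafer=600,
                          yield_rate=0.60, transistors_per_die=4e9)
print(f"mature: {old:.2e} $/transistor, leading edge: {new:.2e} $/transistor")
# If wafer cost and yield loss outpace the density gain, the newer node
# ends up costing more per transistor, which is the point made above.
```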

>how much longer it will be until processors have advanced to the point where most people can easily afford more power than they'll ever be able to use
We've long since passed this point.
The bottleneck that now remains is storage. It's been storage for over a decade.

That's why the NAND revolution means so much. The latency and throughput we're seeing now were previously unthinkable, and that's with hindrances like PCIe bus latency and the storage stack in the way.

There's yet another leap on the horizon: true Storage Class Memory, attached directly to the CPU. The closest one on hand is 3D XPoint, although that's going to remain datacenter-only through 2018 iirc. Besides Intel/Micron, Diablo and IBM are working on their own variants of SCM, last I heard.

But even today, a Celeron and a NAND SSD will keep virtually everyone happy from a usability standpoint.

>Nobody can even make EUV machines except ASML.
What point are you trying to make with this? This doesn't matter in relation to anything. ASML is making them, and foundries have them. They're being manufactured and they work.

As I said before: engineers keep on engineering. As the months go by they'll get higher energy output and longer bursts for more consistent edge uniformity in etching. The process will improve and be refined because people are ever at work on it.

>videos games
>actually hitting a point where they are optimized
lol you wish, the stronger the average CPU gets, the less optimized everything will become
not just games either, but browsers, webpages, operating systems, and so on

>GPUs don't benefit from a 7nm process
>the GPU and CPU aren't on the same integrated circuit in the majority of computers today

If you look at the OP, you'll see the thread is specifically about CPUs. I didn't say computers in general because I know GPUs in particular still have quite a bit of room for improvement, and they'd need to be quite a bit more powerful to satisfy the people who want 4K 120Hz VR in their games.

/graphene/ when?

Not being able to resolve individual pixels does not mean you can't notice a difference.