Moore's Law

Now that it's over, what's the next step for CPU development?

material change

more cores

pipelining

New architecture.
x86 isn't very scalable.
Legacy shit all over it.

replace shitty copper crap on cpus and motherboards with fibre optics

stop making x86

Both of these seem interesting.

Also, what's the law called that goes with Moore's law?

It essentially states that, as the cost of computing decreases exponentially due to Moore's law, the cost of foundries increases exponentially.

bigger chips, which will have higher latency, so the architecture has to be radically different

I'm actually surprised this hasn't already been the case.

well, instead of doing that, we are getting bigger chips by having more cores

both have the same issue: applications need to be written to support them

the more-cores approach has the advantage that even if the application isn't written to use the cores properly, it at least still works because it's still x86

I'm surprised there hasn't been a materials push yet (at least from what I've heard).

Apparently, quantum computing doesn't follow Moore's Law and isn't even more effective than conventional silicon binary computing at normal tasks.

And, due to how it works, it may never be.

So, larger chips aside, do you think there will be any way to go smaller with transistor sizes within the next decade?

Moore's second law

Oh, that's pretty convenient.

5nm seems to be the limit at the moment; no one has figured out how to make anything smaller because quantum tunneling becomes a huge issue

changing materials to allow higher clocks is neat, but the higher your clock is, the shorter the distance electricity can travel per cycle. This means the higher the clock, the smaller the core has to be.
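To put rough numbers on that (just a back-of-the-envelope sketch; the assumption that on-chip signals propagate at about half the speed of light is mine, real interconnects vary):

/* How far a signal can travel in one clock cycle, assuming signals
 * propagate at roughly 0.5c (an assumed figure; actual on-chip
 * propagation depends on the interconnect). */
#include <stdio.h>

int main(void) {
    const double c = 299792458.0;           /* speed of light, m/s */
    const double propagation_factor = 0.5;  /* assumed fraction of c */
    const double clocks_ghz[] = {1.0, 3.0, 5.0, 10.0};

    for (int i = 0; i < 4; i++) {
        double period_s = 1.0 / (clocks_ghz[i] * 1e9);  /* seconds per cycle */
        double dist_mm  = c * propagation_factor * period_s * 1000.0;
        printf("%5.1f GHz -> about %.0f mm per cycle\n", clocks_ghz[i], dist_mm);
    }
    return 0;
}

At 5 GHz that's only on the order of 30 mm per cycle, which is why the clock ceiling and the physical size of a core are tied together.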

Graphene

Do you people jack off to cpus or something

Are any companies seriously considering that transition, though?

That's going to require an architectural/form-factor change, and who knows how many years it'll take to get all the manufacturers on board and the foundries built.

Why would you browse Sup Forums and then come into a thread titled Moore's Law to bitch about people talking about processors?

Writing efficient code.

Nothing; the present and future of consumer electronics is literally endless iterations of 3rd gen i5s and i7s

3rd gen and current gens are already different sizes.

Yeah, at least until something else shows up, good programmers are going to see their pay go up.

Graphene may increase computing power, but it would technically almost invert Moore's Law due to the far smaller number of transistors per integrated circuit, even if it were universally implemented. We might be able to hit around 2nm or so with graphene before bottoming out without going quantum.

There's going to be a focus on multi-core performance and TDP, especially the latter.

Electronics in general will have lower power consumption as the years go on, so manufacturers can still advertise performance increases and IPC improvements when in reality they're in a rut until someone figures out quantum for the masses.

Surely not!

That's racist.

>cpus get faster
>software gets slower

it's like pottery

I do, yes

First extreme ultraviolet (EUV).
Then directed self-assembly.

Moore's law and traditional lithography are dead, but CPU development in general isn't; it just goes a little slower now.

>Electronics in general will have lower power consumption as the years go on

Because of die shrinks (which are getting harder and harder), not because x amount of time happened to pass.

Libre processors

>Are any companies seriously considering that transition, though?

Nope.

For starters, the ONLY company still making leading-edge lithography machines is ASML.
And they aren't interested in memephene.

I envision something that uses primarily contained light particles and electromagnetic fields to process information, rather than a physical material. Seems like a somewhat logical way to curb the whole density problem

>Open power
>Open sparc

Material change + photonics making their way into CPUs and other parts of the system.
They've been throwing a lot of money into photonics, and it's showing promise as the next step in computer evolution.

I don't think that graphene is going to amount to anything and quantum isn't something that consumers are going to get, since that would kinda kill the cpu market.

>B/W
>Band/Width
the fuck?

Maybe FPGA boards are the future. Maybe programmable boards, not software. The problem right now is that it takes about 2 seconds to reprogram an FPGA for a new task, which is way too slow to use in place of a CPU. I heard Intel made a processor with an FPGA for servers and it can handle 300,000,000 tasks per second.

>helpmeicameonmygpu.jpg
You made my day

Cannonlake has an on-board FPGA with 2M gates and multiple ARM cores next to the normal x86 cores. It also uses a 3D die to stuff more silicon into the same real estate.

>I'm surprised there hasn't been a materials push yet (at least from what I've heard).

Silicon is cheap as fuck and has higher yields than pretty much every other potential silicon replacement, even with the node shrinks that silicon is pushing. There's a reason all the GaAs fabs moved to China: the semiconductor companies needed to cut costs.

So the question then is: will any other factors force them to take that step?

Why bigger chips? That inherently puts an even lower limit on clock rate (at least in terms of lightspeed delay; on the other hand, if you're limited by heat generation rather than lightspeed delay, spreading things out might help with heat dispersal). Otherwise the only real advantage would be adding more complex instructions, and it's probably better to go for simple instructions and implement the complexity in software. For further speed increases the only practical choice would be asynchronous processors, or at least having different functional units synchronized independently - so, for example, if the ALU or FPU is doing a long complex calculation, the control unit can execute many other instructions that don't depend on those in the meantime.
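Putting a number on the lightspeed-delay point (the die sizes, the 4 GHz clock and the 0.5c signal speed below are assumed round figures, purely to illustrate the scaling):

/* Clock cycles needed for a signal to cross a die edge, assuming signals
 * propagate at about half the speed of light. Die sizes and clock are
 * made-up round numbers, not measurements of any real chip. */
#include <stdio.h>

int main(void) {
    const double c = 299792458.0;             /* m/s */
    const double signal_speed = 0.5 * c;      /* assumed on-chip speed */
    const double clock_hz = 4e9;              /* assumed 4 GHz clock */
    const double die_edges_mm[] = {10.0, 20.0, 40.0, 80.0};

    for (int i = 0; i < 4; i++) {
        double cross_time_s = (die_edges_mm[i] / 1000.0) / signal_speed;
        printf("%4.0f mm die edge -> %.2f cycles to cross\n",
               die_edges_mm[i], cross_time_s * clock_hz);
    }
    return 0;
}

Once crossing the die costs more than a cycle, you pretty much have to split the chip into independently clocked regions, which is the asynchronous/independent-functional-unit argument above.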

I think the implication that certain races are incapable of writing efficient code is far more racist.

It's time to move away from x86

Sometimes on them, too

>not using your cum as thermalpaste

Optical Computing

Multithreading

moar cache and moar cores

Crashing this plane

/THREAD

> t. AMD

MOAR GIGAHURZ

Either this or graphene. Apparently graphene has proven capable of producing transistors 1nm in width, but not in length. It'll be interesting to see if that gets worked on. Currently IBM expects to get us to 7nm and will probably be looking into different materials/architectures beyond that.

Graphene also offers a fuckload higher clock speed.

Didn't Pentium 4 have this?

x86 is a dead end.

ARM still has tons of room for improvement. Moore's law is alive and well in smartphones.

rethinking the computer as we know it, since it has reached its physical limitations

Do you have any ideas yourself?

I'm interested in seeing if quantum computers ever take off, or graphene.

Beyond that, how do you even go into subatomic computing?

Moore's Law is the doubling of transistors on a die. This is still happening. More cores and better architecture are the way forward.

>x86 isn't very scalable
I'm not so sure. x86 architecture is practically high level CISC being translated into a RISC-esque microcode.

I bet we'll get some Vulkan-esque enhancements for programming on x86 CPUs at some point. If that happens, devs will be able to push the limits of x86 microcode (like how Vulkan and DX12 allow much closer to the metal GPU programming) rather than relying on high-level x86 itself.

For the next few years it's GPUs, ASICs and actually optimizing code. Big companies are already baking applications in hardware.

Time to learn verilog.

>Now that it's over, what's next step for cpu development?

Writing applications that can actually take advantage of all those extra cores.

I seriously have no reason to upgrade past 4C/8T. Every time I look at my CPU load, one of the cores is doing all the work and all the others are sitting there idle.

Sure, there are a tiny number of exceptions, like running "ffmpeg -threads 8". But most applications continue to be designed to completely ignore multi-core systems.

The first dual-core CPU was released 12 years ago -- and after all these years, most apps have made no serious effort to even start thinking about using those extra cores.
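For what it's worth, using the extra cores doesn't require anything exotic; here's a minimal pthreads sketch with a toy workload (the array-summing task and thread count are made up for illustration):

/* Split a toy workload (summing a big array) across N worker threads.
 * Compile with: cc -O2 -pthread example.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N_THREADS  4
#define N_ELEMENTS (1 << 22)

static double data[N_ELEMENTS];

struct chunk { size_t begin, end; double partial; };

static void *sum_chunk(void *arg) {
    struct chunk *c = arg;
    double s = 0.0;
    for (size_t i = c->begin; i < c->end; i++)
        s += data[i];
    c->partial = s;
    return NULL;
}

int main(void) {
    for (size_t i = 0; i < N_ELEMENTS; i++)
        data[i] = (double)(i % 100);

    pthread_t threads[N_THREADS];
    struct chunk chunks[N_THREADS];
    size_t per = N_ELEMENTS / N_THREADS;

    for (int t = 0; t < N_THREADS; t++) {
        chunks[t].begin = (size_t)t * per;
        chunks[t].end   = (t == N_THREADS - 1) ? N_ELEMENTS : (size_t)(t + 1) * per;
        pthread_create(&threads[t], NULL, sum_chunk, &chunks[t]);
    }

    double total = 0.0;
    for (int t = 0; t < N_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += chunks[t].partial;
    }
    printf("total = %f\n", total);
    return 0;
}

The hard part in real apps isn't this boilerplate, it's finding work that actually splits cleanly without the threads stepping on each other.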

From my understanding, 6nm is the hard limit, with 7nm being the practical one for silicon without some kind of amazing insulator.

there is a push; however, the issue with several of the potential successors is that they don't stay stable as a material, or they have short shelf lives.

the way forward is going to be hard and simple at the same time.

computers will become multi-CPU, maybe MCM, maybe separate chips, but they will go multi.

with the two chips, you will have stupid cores, and you will have strong, powerful cores.

now imagine this: you have a simple task that is limited largely by latency, not by chip power. well, that fucker is going to go to the stupid cores, as they are smaller and have less latency.

you then have the strong cores that handle the heavy lifting; these will be traditional cores. you may also have a third set dedicated to background applications, just to take them off the smart and stupid cores and free up a minimal amount of headroom.

stupid cores may also be clusters of many, but shit won't bounce around from core to core like Windows fucking does now, because they are latency-limited and doing that would kill any benefit from the low latency.

you may ask, why not have the stupid cores be GPUs? well, GPU cores are retards; the stupid cores are, if left alone, able to run programs, they just are not strong.
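So basically a dispatch rule along these lines (the task attributes and thresholds below are invented purely for illustration; no real scheduler works off a struct like this):

/* Toy sketch of the stupid/strong/background core split described above. */
#include <stdio.h>
#include <stdbool.h>

enum core_type { STUPID_CORES, STRONG_CORES, BACKGROUND_CORES };

struct task {
    const char *name;
    bool latency_sensitive;   /* limited by latency, not compute */
    bool background;          /* e.g. updaters, indexers */
    double compute_demand;    /* arbitrary 0..1 scale */
};

static enum core_type pick_cores(const struct task *t) {
    if (t->background)
        return BACKGROUND_CORES;      /* keep it off the other cores entirely */
    if (t->latency_sensitive && t->compute_demand < 0.3)
        return STUPID_CORES;          /* small, close, low-latency cores */
    return STRONG_CORES;              /* traditional cores for heavy lifting */
}

int main(void) {
    const struct task tasks[] = {
        {"input handling", true,  false, 0.1},
        {"video encode",   false, false, 0.9},
        {"search indexer", false, true,  0.4},
    };
    const char *names[] = {"stupid", "strong", "background"};

    for (int i = 0; i < 3; i++)
        printf("%-15s -> %s cores\n", tasks[i].name, names[pick_cores(&tasks[i])]);
    return 0;
}

And the no-bouncing part would just mean pinning a task to whichever cluster it was assigned to instead of migrating it around.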

Do most desktop apps even need these cores, with the exception of games or CAD? Most of the time my laptop is doing nothing; it's way faster than needed for websites or playing music. The next compute-intensive thing for casual users is VR, and that runs on GPUs.

The end of Moore's law (for CPUs) is very visible in data centers, where everything is multi-threaded.

It's there, but more software should utilise it.

>Time to learn verilog.

You gotta wonder if they'll need to start moving toward flash-programmable FPGAs at some point -- at least for the drivers.

I worked on a product where about 50% of the code was Verilog running on an FPGA, and the other 50% was C code running on an ARM CPU. The FPGA took care of running the camera and implementing the vision recognition analysis. We had to do it that way to meet hard real-time requirements. You've got to do it that way if you need the extra performance.

At first, it might be less flexible, but eventually the FPGA firmware will get standardized and included on every mobo. At that point, we can start migrating more and more functionality into the FPGA firmware. Not just for the kernel and drivers, but also including common libraries for apps. Some day, you might even be able to call a C library function like qsort(), and know that the kernel is likely to forward most of the work to massively parallel Verilog firmware in the on-board FPGA. That might seem far-fetched, but we might have no choice if CPU advancement continues to stall this badly for another decade.
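Just to illustrate the application side of that: the call below is plain standard C qsort(), and the point is it wouldn't have to change at all if, hypothetically, libc or the kernel forwarded the actual sorting to FPGA firmware behind the scenes:

/* Ordinary qsort() through the standard libc interface. Whether the work
 * runs on the CPU or (hypothetically) gets offloaded is invisible here. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    int values[] = {42, 7, 19, 3, 88, 1};
    size_t n = sizeof values / sizeof values[0];

    qsort(values, n, sizeof values[0], cmp_int);

    for (size_t i = 0; i < n; i++)
        printf("%d ", values[i]);
    printf("\n");
    return 0;
}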

>mfw still in i5 3rd gen.
>mfw besides usb transfer speed, everything works smoothly.
>mfw still waiting for a reason to change my cpu...

>mfw still in i5 3rd gen.
I'm in exactly the same situation.

I doubt if I will have any reason to build a new system until the 7nm die shrink that will occur after Tigerlake (which will probably be generation number 13 or 14).

By that point, hopefully the single-thread performance will beat my i7-3770K by over 50%, making it worthwhile to upgrade. (I only care about single-thread performance because I'm .)

Unfortunately, AMD doesn't help me at all because the single-thread performance of the R7-1800X is equivalent to the i5-2550K, meaning that AMD has finally caught up with where Intel was in 2012 in single-thread performance. (Good job, AMD!!!)

>Do most desktop apps even need these cores,

Oh for heaven's sake, of course not. At least 80% of desktop users will almost never max out the CPU load on a 4C/4T machine. And at least half of those would be fine with 2C/2T -- basically the "Facebook machines" and the "MS Word machines".

They added more cores simply because they couldn't increase clock speed anymore.

Then, the vast majority of the software market did not respond at all, and ignored those extra cores.

Stagnation and eventual regression as competent developers are let go in order to make room for "diverse" teams that can't even maintain the technology that's already been developed.

Even if you think that's the case for the Western world, China/Japan have no intention of JUSTing their development teams.

>Eyeglasses that beam images onto the users' retinas to produce virtual reality will be developed. They will also come with speakers or headphone attachments that will complete the experience with sounds. These eyeglasses will become a new medium for advertising which will be wirelessly transmitted to them as one walks by various business establishments. This was fictionalized in Dennō Coil.

Taken from Kurzweil's predictions.

Why? There's literally no point in this sort of technology. Why would you buy hardware with the sole purpose of being advertised at?

>China
Didn't the one-child policy kill their future?
>Japan
Aren't they way behind because of their weirdness?

You get two kids in China and are taxed beyond that IIRC.

Japan is weird, yes, but not way behind. They're also very nationalistic in terms of not wanting outsiders to come muddy up their culture.

Hyper pipelining

>Unfortunately, Broadwell-E doesn't help me at all because the single-thread performance of the 6900x is equivalent to the i5-2550K, meaning that Intel has finally caught up with where Intel was in 2012 in single-thread performance. (Good job, Intel!!!)

I think it's time we stop developing CPUs because we can no longer double the transistors while lowering size.

It's over, we need to just give up because we can't hit 2x anymore

Reality is racist.

Kek.

Graphene transistors apparently can bump speeds into the THz range.

Interestingly enough, Kurzweil seems to have predicted the likely next step in computing (larger chips because we can't fit smaller transistors) will happen about 30 years from now.

Don't take this as me saying he's just that good at predictions, though; I think it means he expected Moore's law to hit subatomic transistors or something. Though if people do switch to graphene we can hit 2 and possibly 1nm which would take about the time he guessed.

They'll just make larger CPUs. Think Bulldozer but functional.

NSA has a supercomputer in development with 7nm Graphene chips made by IBM (it's classified).

Intel has 8nm silicon working, but the QC is horrible and they're panicking over it

This.
I hope I will be good enough to get good money

Cache

stacked ranked on-die tiny Peltier coolers all around 1 sanic core

>I'm surprised there hasn't been a materials push yet (at least from what I've heard).
There is, but the hardest part is getting a clean oxide for the gates. SiO2 is hard to beat.

>more cache
Wrong. The performance stabilizes after a certain size, so even 1GB of cache wouldn't noticeably improve performance.

Double cores will be the next big thing. It's one physical core that's actually two physical cores.

What?

So, Bulldozer? That failed.

Maybe multiprocessor boards become a more widespread thing. Or coprocessors.
Or progress will rely on increasing GPU power.

That, coupled with processors being more capable of running on much lower power.

We might see more progress on the side of power efficiency (on mobile devices) and less need for cooling (on desktops).

Or more ARM devices?

I believe there's a limit to how many processors you can fit on a board while remaining efficient.

>Faildozer
Not sure if baiting or idiot

3D chips.

We already have 3D memory cells.
3D chips just need more careful design consideration for cooling.

After that, either graphene / similar 2D molecule or optical.

>Why would you buy hardware with the sole purpose of being advertised at?

It's just like all other technology -- there will be entertainment content that people want, mixed in with ads, and they will make damn sure that it's hard to separate the two. The industry cares only about the ads, but they tolerate the entertainment content to help make it easier to deliver the ads. That's exactly what happened with radio and television.

>Maybe multiprocessor boards become a more widespread thing.

I don't anticipate that happening.

Once they started adding multiple cores to the CPU, most users could see that a vast majority of the time only 1 or 2 of their 8 threads are actually in use.

As a result, most users have the attitude: "We appreciate the extra cores, but to be honest, we don't really seem to need them very often". As long as that attitude is common, it's unlikely that multiprocessor boards will become more popular.

It's really a software problem. We just don't have the apps that can take advantage of the extra cores that we've already been given. As long as that's the case, I don't see any force that will push us towards multiprocessor desktop boards.

(The only exception is the server market, where the demands on the CPU are totally different.)

>(it's classified).

Black helicopters have been sent to your location