I keep finding articles about how Moore's law is ending and that we're reaching the practical limits of silicon as a material. Is this true and if yes, what's the likely next step for CPU technology?
>trying to have an actual discussion
>Not talking about consumerism
We're not there yet, but dangerously close. The most promising technology is photonic processors, which use light instead of electrons and would solve a lot of issues in processor design.
It's promising because such CPUs can be made using existing hardware, light/electron conversion is feasible for compatibility with regular components, and most importantly there are already working prototypes; even so, it's probably a good 15-20 years off.
semiconductors made out of materials other than silicon, like diamond or carbon nanotubes
also quantum computing
>Implying
>Implying
>Implications
Probably writing actually good software should be the next step
But we'll have to fight stuff like OOP for that to happen
Moore's law was idiotic to begin with. But we're far from done. Quantum computers are coming up.
It's not just silicon as a substrate, it's our ability to use ever-smaller photolithography on anything.
We're already making transistor features substantially narrower (~20 nm) than the etching light wavelengths (~200 nm = ultraviolet), which requires a lot of trickery like immersion lithography (a layer of water between the lens and the wafer), computing weird diffraction patterns, etc.
Making shit even smaller will require an increase in the number of masks/exposures per die layer ($$$) and/or extreme ultraviolet light wavelengths. The latter is a nightmare to get working and is eating up billions of R&D bux as we speak. AFAIK, the issues are:
> EUV light generation is extremely inefficient
> lenses don't even work at those wavelengths, so you need custom dielectric mirrors with curvature and layer-thickness control perfect down to the nm level
> said mirrors still each absorb a quarter of the incident EUV light anyway
> so they need active water cooling that can't be allowed to induce vibrations during operation
> and they still erode rather rapidly anyway
And as difficult as EUV apparently is, I have no idea how x-ray wavelengths could even be possible.
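To put rough numbers on the DUV/EUV gap above: the smallest printable feature follows the Rayleigh criterion, CD = k1 * λ / NA. A back-of-the-envelope sketch (the k1 and NA values are illustrative assumptions, not any specific scanner):

```python
# Rayleigh criterion: smallest printable feature (critical dimension).
# CD = k1 * wavelength / NA; k1 ~0.25 is near the single-exposure limit.
def critical_dimension(wavelength_nm, numerical_aperture, k1=0.25):
    return k1 * wavelength_nm / numerical_aperture

# 193 nm ArF light with water immersion (NA ~1.35, assumed):
print(critical_dimension(193, 1.35))   # ~36 nm -> multi-patterning below this
# 13.5 nm EUV with mirror optics (NA ~0.33, assumed):
print(critical_dimension(13.5, 0.33))  # ~10 nm in a single exposure
```

Which is why sub-30 nm features on 193 nm light take multiple exposures per layer, and why EUV is worth the pain despite the mirror problems.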
It's not even clear yet that QC isn't snake oil.
My personal prediction: transistor shrinkage will effectively stop, but we'll get good enough at new gate geometries and chemistries (e.g., "gate-all-around") that power efficiency keeps improving for a while longer. Chips will move to 3D stacking and will include broader specialized acceleration logic blocks, most of which won't be used concurrently.
This reality is a computer simulation based on light technology. It would make sense that we are heading in that direction.
The costs hit far harder as you get closer to the physical limits of silicon, to the point where the economics stop making sense.
450mm wafers were supposed to be the natural extension to bring down costs, except that the lithography costs didn't scale as nicely as people wanted due to slowing revenue growth.
This is unironically why capitalism is a failure.
>inb4 'merica
NASA could not have existed without a socialist mindset and a literal enemy-of-the-state immigrant
'nough of this Sup Forums shit. I want my GPL-loving Sup Forums back.
Moore's Law ended long ago. At least if you take the common formulation that performance will double every 1-2 years, and not the stricter formulation that transistor density doubles every 1-2 years. If you subscribe to the latter, well, yeah, it's almost over. We can go 3D and start stacking transistors on top of each other, but you quickly run into a huge thermal wall. Even if you're willing to have a chip with channels in it for liquid cooling, you're gonna have to accept much higher power draw, which a lot of people in a lot of applications will balk at. That's gonna be what the end looks like: The performance you get scales with the amount of power you're willing to use. Less of one, less of the other, and vice versa.
You're also running into limits more fundamental than what the lithography can do. Even if you could make a chip with a feature size of a few atoms quantum tunneling would stop it from working like chips do at larger scales. We're already close to that limit, I thought it was one or two process nodes away.
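The tunneling argument can be made concrete with the standard WKB estimate for a rectangular barrier, T ≈ exp(-2κd) with κ = sqrt(2mΦ)/ħ. A sketch (the 1 eV barrier height is an illustrative assumption):

```python
import math

HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_E = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19      # joules per electronvolt

def tunneling_probability(barrier_ev, width_nm):
    """WKB transmission through a rectangular barrier: exp(-2*kappa*d)."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Leakage grows exponentially as the barrier thins:
for width in (3.0, 1.0, 0.5):
    print(width, "nm:", tunneling_probability(1.0, width))
```

The exponential is the killer: halving the barrier width doesn't double the leakage, it multiplies it by orders of magnitude.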
Well the GPL is founded on the capitalist concept of private property rights, you know. If you have strong rights to dictate how your property is to be used, you can specify that it must be made freely available with source and cannot be distributed under a different license. It's a very capitalist construction, it's just being used for different ends than normal.
Also, poor return on investment is a bad thing whether it's a private corporation or a public government doing the investing. The former just wastes shareholders' money, the latter wastes everyone's money.
Sup Forums is not one uniform opinion on anything or anyone. Never has been, never will be, cuckboi
>It's a very capitalist construction
Capitalism is an economic system based on private ownership of the means of production and their operation for profit.
for profit
for profit
for profit
en.wikipedia.org
Well no one will do it for free
>private ownership
we've established that. If you (or your organization, etc etc) owns the copyright, you have property rights over the software. (you can dictate how it is to be used, modified, disposed of, etc)
>means of production
I think it's arguable that software counts as a means of production, considering how much economic value is tied to technology these days. Certainly things like operating systems and compilers count.
>profit
The property owner gets to decide what counts as profit though. Sure it can be and usually is money, plain and simple, but it doesn't have to be. You can decide that profit means "social capital", being a respected public intellectual. RMS certainly seems to value that, considering that he spends so much time traveling to places and giving speeches.
Something else that's worth noting is that the FSF considers licenses containing restrictions on commercial or for-profit use to be non-free and incompatible with the GPL.
Also worth noting is that software isn't economically scarce. Once it exists, it can be copied for essentially zero marginal cost. In that sense, its natural market price is zero.
Moore's Law is technically about transistor count, not performance.
The more relevant but lesser-known Pollack's Rule is why we're fucked and just living in the MOAR COARS/MOAR iGPU era:
> performance increase due to microarchitecture advances is roughly proportional to the square root of the increase in complexity
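In code, the tradeoff Pollack's Rule implies (and why MOAR COARS wins) looks roughly like this sketch:

```python
import math

# Pollack's Rule (rough empirical rule): single-core performance grows
# ~sqrt(x) when you spend x times the transistors on one core.
def pollack_perf(complexity_factor):
    return math.sqrt(complexity_factor)

# Double the transistor budget on one big core:
print(pollack_perf(2.0))        # ~1.41x performance
# Or spend it on a second identical core (perfectly parallel workload):
print(2 * pollack_perf(1.0))    # 2.0x performance
```

So once the workload parallelizes at all, extra cores beat extra core complexity, and the transistor budget gets spent accordingly.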
Also, we're not quite at the point of gate widths being the primary and unavoidable source of tunneling/leakage.
FD-SOI works by thinning the amount of silicon in a channel and under the sources/drains, but it's unfortunately so expensive to make the wafers that virtually nobody wants to use it.
Moore's law will never die. Each author's interpretation is different according to the point they want to make.
It's a bogus claim by Intel.
Moore's law "ended" when AMD was pushed out of the market. This is what created the era of 1-3% IPC gains per generation.
Moore's law is certainly not dead, it's only been delayed by Intel's anti-consumer practices.
Why do you think Intel released 2c4t cpu for
>he thinks there is a moore's law
>people actually debate if he's right or wrong
Prove to me Moore's law is right, buy only using food or car analogies
You don't think it has anything to do with the end of Dennard scaling? Blaming Intel for AMD being incompetent is pretty ignorant also.
>just living in the MOAR COARS
I would say it's more due to the breakdown of Dennard scaling: while transistor density is still increasing, the ability to scale frequency is pretty much screwed due to leakage.
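For reference, the Dennard breakdown can be summarized in a few lines: as long as supply voltage shrank with feature size, power density stayed flat; once voltage hit its leakage floor, every shrink started raising heat per unit area. A toy model using the classic constant-field scaling exponents:

```python
# Toy Dennard model: scale linear dimensions by 1/k.
# C ~ 1/k, f ~ k, A ~ 1/k^2; V ~ 1/k while voltage can still scale, else ~1.
def power_density_ratio(k, voltage_scales):
    v = 1.0 / k if voltage_scales else 1.0
    power = (1.0 / k) * v ** 2 * k      # dynamic power ~ C * V^2 * f
    area = 1.0 / k ** 2
    return power / area

print(power_density_ratio(2.0, True))    # 1.0: classic Dennard era
print(power_density_ratio(2.0, False))   # 4.0: post-Dennard heat wall
```

That factor-of-k^2 heat penalty is exactly why clocks stalled and the extra transistors went into more cores instead.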
It's all true, what you wrote, but Lithography is not really the limiting factor.
The absolute limit is reached when it's no longer statistically guaranteed that there are electrons in the gate. We're at the point where there are on average only a single-digit number of electrons in a modern transistor gate, so the end is in sight.
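You can sanity-check the electron-count claim with a parallel-plate estimate, N = C*V/e with C = eps0*eps_r*A/t. All dimensions below are illustrative guesses, not any specific process:

```python
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C

def gate_electrons(length_nm, width_nm, oxide_nm, eps_r=3.9, vdd=0.7):
    """Electrons on a gate modeled as a parallel-plate capacitor, N = C*V/e."""
    area = (length_nm * 1e-9) * (width_nm * 1e-9)
    capacitance = EPS0 * eps_r * area / (oxide_nm * 1e-9)
    return capacitance * vdd / E_CHARGE

# Small modern-ish gate (assumed 7 nm x 7 nm, ~0.8 nm equivalent oxide):
print(gate_electrons(7, 7, 0.8))   # on the order of ten electrons
```

With counts that low, ordinary shot noise and dopant fluctuations stop averaging out, which is the statistical problem above.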
Another limit is the produced heat, because at some point the heat is produced in such a small area that no cooling method can prevent the chip from dying almost immediately.
Alternatives:
Quantum computers.
Like you said, maybe snake oil and they will never be used for personal computers (QC will never work at room temp)
Light based:
At the very beginning, but plausible. My bachelor thesis was about polaritons in an AlGaAs structure, and this kind of thing works at least at room temperature and with easy-to-handle materials. My advisor has published a paper about a polariton transistor, but in general people researching this topic are more interested in finding better ways to integrate information processing (with electrons) and information transfer (photons, optical fibre).
Spintronics:
The idea is to represent the information with the spin state of an electron. This is probably the best alternative so far, but there are also pitfalls. The discovery of topological insulators led to a hype in spintronics.
Memristor:
A device that can process and store information at the same time (transistor+memory). A guy (Leon Chua) predicted it in 1971 because of symmetry considerations in electrodynamics. There are some promising structures made of metal oxides that could act like a memristor.
With memristors, hardware architecture would be much more efficient.
...
>the MOAR COARS era
Not a bad era to be in. Eventually we'll get a 64 core CPU and you'll be able to do insanely awesome shit on it with any parallelizable problem.
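Worth tempering that with Amdahl's law, though: speedup is capped by the serial fraction no matter how many cores you throw at the problem. A quick sketch (the 95% parallel fraction is an illustrative assumption):

```python
# Amdahl's law: speedup on n cores for a workload with parallel fraction p.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for cores in (4, 16, 64):
    print(cores, "cores:", amdahl_speedup(0.95, cores))
# Even on 64 cores, a 95%-parallel job gets ~15x; the hard cap is 1/(1-p) = 20x.
```

So "insanely awesome shit" on 64 cores only holds for problems whose serial fraction is genuinely tiny.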
>the heat is produced in such a small area, that no cooling method can prevent the chip from dying almost immediately
Could happen at 6nm (or 8? I don't remember the number) (real) gate length
Dennard scaling is a minor issue, one fixed by focusing on multi-core architecture.
Intel failed to bring this to the table and stuck with their dual cores for 10+ years.
In healthy competition, Intel would have brought more cores to the low-end market and/or acquired better multicore architecture support and brought it down-market to keep performance increasing in relative terms.
They failed to do both and it showed. They are only now beginning to do this because of healthy competition from AMD's Ryzen. If Ryzen succeeds, I guarantee you Intel will move their tiers down a bit so their CPUs are more affordable for the low-end market.
You don't seem to understand GPL or capitalism so I'll ask you a more friendly and less intense question.
How far into your first economics class are you?
What about graphene or carbon nanotube computers? I hear they'd produce less heat so you could get away with much higher clock rates.
ironically, open source is both commie and capitalist as fuck.
Satisfies both the communalist, volunteer, "we don't need no money, fuck the corps" itch, and the Smithian "LOWER PRICES, HIGHER QUALITY, anyone can be an entrepreneur, since only skill and competition matter, and inefficient or gone-to-shit solutions join the dustbin" one.
It's a Libertarian Socialist thing where you build socialist institutions from the ground up rather than having a big centralized state do it.
But to be fair, every time it seemed that Moore's law was not going to hold, people changed the number of months. (Moore said approx. 12 months; now Wikipedia says 2 years.)
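The cadence matters enormously because the growth is exponential; a quick sketch of how much "the number of months" changes the curve:

```python
# Transistor count multiplier after `years` at a given doubling period.
def transistors(start, years, doubling_months):
    return start * 2 ** (12 * years / doubling_months)

print(transistors(1, 10, 12))   # 1024x after a decade at 12-month doubling
print(transistors(1, 10, 24))   # only 32x at 24-month doubling
```

So stretching the period from 12 to 24 months quietly turns a factor of ~1000 per decade into a factor of ~30, while still letting people say "Moore's law holds."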
or shitty megacorps.