After Si?

With tick-tock dead and μarch-to-μarch improvements of ~10% or worse, when's it game over for silicon? 10nm? 7nm?

And what comes next - III-V, CNTs, graphene?


STACKING will come out before graphene will ever be capable of mass production, at which point nobody will care about the barely better performance of graphene

7nm is the end.

3d structures are the future.

Afterwards, carbon nanotubes or graphene.

Then quantum computers and more exotic materials and phenomena.

>Then quantum computers
Not happening if you have any clue what quantum computers are.
They're pretty much worthless for anything consumers care about.

Quantum computers are good for breaking public-key cryptosystems (Shor's algorithm) and, to a lesser extent, symmetric crypto (Grover's algorithm), as well as some optimization problems (SAT, maybe?). Other than that, most tasks people care about will still be done on classical computers.
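A back-of-envelope sketch in Python (purely illustrative scaling, not a quantum program; it assumes brute-forcing an n-bit key):

import math

# Classical exhaustive search tries on the order of 2^n keys;
# Grover's algorithm needs on the order of sqrt(2^n) oracle calls.
def classical_queries(n_bits):
    return 2 ** n_bits

def grover_queries(n_bits):
    return math.isqrt(2 ** n_bits)

for n in (64, 128, 256):
    print(n, classical_queries(n), grover_queries(n))

So a 128-bit key under Grover is only about as hard as a 64-bit key classically, which is why the usual advice is to double symmetric key sizes rather than abandon them.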

Kek, yeah and cooling won't ever be an issue with stacking chips that run at ~100W.
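Rough arithmetic (assuming a ~1 cm^2 footprint, which is roughly desktop-CPU sized, and 100 W per layer; both numbers are illustrative guesses):

die_area_cm2 = 1.0         # assumed footprint
power_per_layer_w = 100.0  # assumed per-layer power

for layers in (1, 2, 4):
    flux = layers * power_per_layer_w / die_area_cm2
    print(f"{layers} layer(s): ~{flux:.0f} W/cm^2 through the same top surface")

One layer is already around 100 W/cm^2; every extra layer pushes its heat through the same footprint, which is why stacking so far shows up in low-power parts (memory, mobile) rather than 100 W desktop dies.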

Pretty sure Intel started using 3d-transistors the last time they hit a wall.

we aren't getting more than 10% improvements because there is no competition.

Just gotta wait for RISC-V. They aren't carrying around decades of legacy, so they should be able to deliver solid performance per watt.

I'm unfamiliar with the area, but wouldn't the quantum Fourier transform (the quantum analogue of the DFT) offer improvements to many classical algorithms?

To that end, would it be possible to combine the operations of both classical and quantum computing? As in, I buy a quantum computing card as we do with gpu processing.

>As in, I buy a quantum computing card as we do with gpu processing.
We're several breakthroughs away from having any idea how a quantum computer could possibly be made to fit in your hand at room temperature, so don't expect to plug one in your PCI slot any time soon.
I would not be surprised if it turned out to be impossible.

Which means basically nothing to people who use and need software that uses existing architecture. That's just about everybody not on mobile.

This is the age of open-source, there's no such thing as software that uses a particular architecture. There's a billion lines of software packaged in Debian just a recompile away.

Just like the transition from ARM to ARM64, and easy recompilation to Intel? That's Android, iOS as well as Windows Phone.

Or are you talking about how the Raspberry Pi with its ARM architecture runs all that software created for x86? Or how it runs Windows 10? Or how Windows RT, recompiled for ARM, featured a full MS Office suite?

Or perhaps Adobe's software for the iPad Pro?

For up-to-date software as well as old open source software it's easy to port. For old, proprietary software linked against Win32, you'd just need Windows to support the new architecture.

SiGe

They're pretty much worthless for anything at all.

>runs all that software created for x86

Wait what? How?

This is a friendly reminder that you can run any binary on any architecture as long as qemu supports it, without even recompiling.

Silicon is already at the limit; doubling performance will take multiple years as it is. Optical chips are probably the next replacement, but it will still take some years and won't give much more improvement over silicon. Superconductors are still too expensive and would be on par with optical while being harder to make.
Usable quantum computing hasn't even been proven possible and would be useless for most current applications.
tl;dr: we need to improve software now because we can no longer rely on hardware getting better.

Fuck all this 10nm, 7nm bullshit, all we really care about is clock speed. Thing is, we could do 10 GHz today, except things get too hot. What we really need is better cooling to get higher clock speeds.
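Rough dynamic-power scaling, P ≈ C·V²·f, with made-up numbers (the voltage slope below is a hypothetical guess, not any real chip's V/f curve):

base_freq_ghz = 4.0
base_volts = 1.2
base_power_w = 100.0

def scaled_power(target_ghz, volts_per_ghz=0.05):
    # assume the core needs a bit more voltage for every extra GHz (illustrative slope)
    f_ratio = target_ghz / base_freq_ghz
    v = base_volts + (target_ghz - base_freq_ghz) * volts_per_ghz
    return base_power_w * f_ratio * (v / base_volts) ** 2

for f in (4.0, 6.0, 8.0, 10.0):
    print(f"{f:4.1f} GHz -> ~{scaled_power(f):4.0f} W")

Even with a gentle voltage bump, 10 GHz lands in the several-hundred-watt range for a single die, which is exactly the cooling problem.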

That's a completely retarded statement. I don't think you realize just how stupid you're being right now.

Ask again in 2023 when Moore's law will be renewed.

Recompile for ARM.

You can even recompile everything for a specific x86 architecture. All made easy with modern compilers!

He's talking about Android apps, which are written in Java, which means they can run on any processor.

Recompiling doesn't fix the fact that the underlying hardware is proprietary, that every SoC manufacturer cooks their own soup when it comes to booting a SoC, which means device-specific OS images, or that the manufacturers don't want to open-source their drivers or documentation.

>10 GHz
You can't go faster than light, son.

Hardware acceleration for common things like gzip compression/decompression.

You have to go back
LN2 struggles to get over 7/8 GHz

An emulator, just like when Apple switched from PowerPC to Intel.

How would that be a problem for the open MIPS?

They should just make EFI standard for ARM

I know that some ARM devices already use it

Nanowire, though I think the benefits are marginal?

>emulation
Spotify needs more than a minute just to start
youtu.be/scF9-Qnacmc?t=135

>youtu.be/scF9-Qnacmc?t=135
When will this shitty site finally support youtube timestamps.

Can't germanium be used as a drop-in replacement for silicon? I thought they said that would be good for now, since they don't need new fabs like they would for graphene.

Building new fabs doesn't matter. Every process improvement requires a new fab anyway.

It is a hell of a lot cheaper to modify existing fab tech for germanium than researching how to build a new graphene one.

Serious question:
why don't we just make processors bigger?
I'm not talking about using the 32nm manufacturing process again, I mean use the 14nm process, just make the physical unit larger.

You could fit a larger number of transistors in there, or you could use the added space for heat dissipation.

But then I guess the square-cube law comes into play and the heat rises exponentially as opposed to... algebraically?

linearly, you're looking for linear there.

thank you wow I feel retarded

The problem is latency: the bigger the circuit, the farther the signals have to travel, and the slower the whole thing gets.

In 1 nanosecond a signal can move about 30 centimeters.
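Quick sanity check on that figure (using the vacuum speed of light as the absolute best case; real on-chip wires are considerably slower):

C_CM_PER_NS = 30.0  # ~speed of light in vacuum

for freq_ghz in (1, 4, 10):
    cycle_ns = 1.0 / freq_ghz
    print(f"{freq_ghz} GHz: at most {cycle_ns * C_CM_PER_NS:.1f} cm per clock cycle")

At 4 GHz a signal covers at most ~7.5 cm per cycle even at light speed, so a physically bigger chip quickly eats into your cycle time.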

Larger dies mean more defects per die, which means lower yield.
If you're lucky you can just disable the defective parts, like cores or cache, but sometimes you have to throw the entire die away.
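The classic Poisson yield model makes the point; the defect density here is an assumed, illustrative number, not any fab's real figure:

import math

def poisson_yield(die_area_cm2, defects_per_cm2=0.1):
    # fraction of dies expected to be completely defect-free
    return math.exp(-defects_per_cm2 * die_area_cm2)

for area in (1, 2, 4, 8):
    print(f"{area} cm^2 die: ~{poisson_yield(area):.0%} defect-free")

Under this model, doubling the die area squares the good-die fraction (0.90 becomes 0.82, then 0.67...), which is why huge monolithic dies get expensive fast.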

O'rly?
theoverclocker.com/9-ghz-barrier-falls-hard-amd-hits-milestone-with-9062-mhz-validated/

What happened to TrueNorth?

No worries user, we all get brainfarts every once in a while.

Think AMD could finally beat Intel's Core i5 2500 once Zen has achievable temps of zero kelvin?

Zero Kelvin would mean the CPU is off...
Learn yourself some physics.

So never.

>ALL that software created for x86

I'm pretty sure there is a lot of proprietary software created for x86, and recompiling is not really an option there.

>running x86 software via emulation on a device that cannot beat P4s

For some reason I was thinking of some sort of changes in the architecture that would allow running x86 software without emulation, like the Transmeta CPUs. If emulation counts, then I can say that AVR microcontrollers can run any ARM code too.

youtu.be/nm0POwEtiqE

k so where's my mainstream 7 GHz processor

Graphene is just a meme.
Nobody is seriously developing it.

Directed self-assembly is the next stage after EUV.

Because that would make the chips more expensive.

Raw wafers are expensive, and larger dies take longer to produce.

It's all about the cost per transistor.

>exotic

Crystalline computing when?

When you stop being a fucktard. Kill yourself.

what about cooling in 3d?

A properly designed 3D chip would generate the same heat as an equivalent flat chip.

As a substrate material, silicon will be around for years yet. The industry doesn't want to make a $50 billion transition to an entirely different material. We'll be seeing more heavy use of SiGe, which is a no-brainer. From GloFo at least we'll be seeing SOI return to the mainstream computing market after a few years' hiatus. Behind the scenes in bulk there is already an immense amount of work being done utilizing new dielectrics and insulators to get more performance out of existing design methodology and tooling.
Gate structures and substrates are entirely different issues with only a little overlap. Most transistors can be implemented in any medium.

There will be 5nm bulk silicon on the market from Intel, Samsung, and TSMC. They might stay there for several years while focusing only on back-end scaling and improving various parametrics generation over generation, similar to how Intel just publicly announced their 14nm+ process.
If you're asking what is planned beyond this point the answer is that nothing is yet. You can't pick a winner when the horses aren't even registered to race yet.

And Intel's x86 core architecture has literally no reflection on their process tech.

This post is terrible.

Stop embarrassing yourself.

Why do you post this in every thread?

You don't just add transistors to something, they have to serve a purpose. Making a core wider past a certain point doesn't improve serial throughput; like everything else, you're governed by diminishing returns. The overall limiting factor in a core's throughput potential is how fast you can feed instructions to the ALUs, and how fast data can move through the cache hierarchy.
Feeding instructions faster and without misses or stalls is the hardest thing to do.
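A textbook-style sketch of why (the classic CPI-plus-memory-stall model; every number here is an illustrative guess, not a measurement of any real core):

def effective_ipc(base_cpi=0.25, mem_refs_per_instr=0.3,
                  miss_rate=0.01, miss_penalty_cycles=200):
    # effective CPI = base CPI + memory references * miss rate * miss penalty
    cpi = base_cpi + mem_refs_per_instr * miss_rate * miss_penalty_cycles
    return 1.0 / cpi

print(effective_ipc(miss_rate=0.00))  # perfect cache: IPC 4.0
print(effective_ipc(miss_rate=0.01))  # 1% misses: IPC ~1.2
print(effective_ipc(miss_rate=0.02))  # 2% misses: IPC ~0.7

A nominally 4-wide core drops to a fraction of its peak from a 1-2% miss rate alone, so adding more ALUs without fixing the front end and the cache hierarchy buys you almost nothing.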

IBM should just drop the front and admit that they're time traveling DARPA foot soldiers preparing to fight the alien menace.
Their DNA structure logic and interconnects are too crazy to be real.

Silicon Carbide has decent properties that might be useful for IC manufacturing in the future.
Bonus points for SiC wafers being translucent.

so it's ok to make smaller transistors to pack more of them on a die, but it's wrong to make bigger dies?

The problem I see is that the chip wouldn't generate and distribute the heat uniformly. What about the structures in the middle of it?

Anybody seen this?

arstechnica.com/science/2016/02/is-d-waves-quantum-processor-really-108-times-faster-than-a-normal-computer/

I don't know anything at all about QM, but this thing sounds interesting. Does the D-Wave even count as a "quantum" computer?

lol

TIME CRYSTAL QUANTUM COMPUTERS

This is Sup Forums in its essence