What the hell will processors become?

So with Moore's law and thermals being issues, our processors have sort of hit a cap. I mean they can, and will, get better, but in all reality they won't get that much better unless architecture improves and whatnot. The only viable upgrade would be quantum processors, right?

I wanna hear people predict when quantum processors will be consumer grade. Super out-there question, I know.

probably transform into something we'd have never even expected or predicted

aside from this prediction of course

>replacing my 2008 intel i7 920 with a 7700k i7
>fucking nothing is different

We already hit the cap for normal usage

>quantum
No, those are only better for sets of problems you're probably not doing. Next step will probably be switching from silicon. It's just going to be expensive.

Next step is dropping silicon and switching to graphene.

At least it's lower power.

Bigger dies

not to a meaningful degree you'd give a fuck about unless you're incredibly poor

Optical logic. Then multi-band optics. In 40 years your small pocket computer will have more compute and storage than a modern datacenter.

Moore's Law isn't an actual law, idiot. It's just a way to up production.

The """"""law"""""" was debunked long ago. What kind of retard doesnt understand deminishing returns

I'm glad I didn't make the same mistake and stuck with my Ivy Bridge i7.

I agree. If we can find a way to store quantum data then maybe that, but as of now it looks like we're going toward nano optical logic and pretty much just using mini lasers.

I heard something about going to light-based computing with fiber-optic traces and whatever, but I'm not as tech literate as I'd like to be, so I don't know how viable or practical that type of solution would be.
It seems like it would give us a shit ton of thermal headroom to play with though.

>believing something can be increased to infinity
>normie memes

Single-thread performance more than doubled. Overall performance is almost three times higher. I know, we expected more after 10 years, but still, something is different.

>graphene
nah man. graphene for CPUs was dropped a while back.

The next big step is really cranking up the IPC and dropping power usage through VLIW or something similar. Think Transmeta Crusoe but not shit. After that we're about tapped out for process improvements on CMOS, so it'll be some huge, radical change like the move from minis to micros.
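To make that concrete: a VLIW machine has no out-of-order hardware, so the compiler has to find independent operations and pack them into one wide instruction word itself. A purely illustrative C loop (nothing Itanium-specific, just the kind of code a static scheduler likes):

void saxpy4(float *restrict y, const float *restrict x, float a, int n)
{
    /* The four multiply-adds in each iteration have no dependencies on
       each other, so a VLIW compiler can issue them in the same wide
       instruction word instead of relying on out-of-order hardware to
       discover the parallelism at runtime. */
    for (int i = 0; i + 4 <= n; i += 4) {
        y[i]     += a * x[i];
        y[i + 1] += a * x[i + 1];
        y[i + 2] += a * x[i + 2];
        y[i + 3] += a * x[i + 3];
    }
    for (int i = n & ~3; i < n; i++)   /* leftover elements */
        y[i] += a * x[i];
}

The catch, as Itanium showed, is that the compiler has to be good enough to actually fill those slots.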

(((they))) have a monopoly so there's no point in getting much better, only a little so that idiots will buy the same thing every year. At least AMD is giving them a run for their money and we might see some further improvements soon.

Quantum will never be consumer grade. Traditional computing is more practical for day to day usage. People need to stop treating quantum computing like it's "magic computing." It's better for certain use cases but significantly worse for others. Even in the use cases that it's useful for it's not perfect.

>The next big step is really cranking up the IPC and dropping power usage through VLIW or something similar.
Itanic is still sinking to remind you that VLIW *never* works for CPUs.

This year showed how much improvement there can be when there's actually some competition.

Processors have indeed hit a cap on speed. However, we'll still see them advance in other ways:
>More cores. Perhaps eventually so many that they'll replace GPUs.
>Lower power consumption. Smaller, lighter batteries, longer battery life, closer to desktop performance in mobile devices, lower power bills and cheaper PSUs for desktops, less heat for all devices.
>Cheaper CPUs.
>More components integrated into the CPU. Cheaper and more compact motherboards.

Quantum may eventually make its way into consumer things, but it's not going to replace the CPU because of how it works. It's better as a coprocessor that gets used for the fairly few problems it's actually efficient at solving.

Itanic sank because it couldn't run x86 programs apart from in a slow emulator, and because the compiler was ass at release. If we can get people less tied to x86, and if the chip comes out with a compiler available that can properly take advantage of it, VLIW could indeed work on consumer PCs.

Itanic released at the height of the Wintel monopoly. Nowadays if Linux, GCC, Go, OpenJDK, and node.js are ported to a new CPU arch it can live or die on its own merits in the server space.

Oh, look. A retard who knows nothing about EE, physics, or microprocessor engineering.

>Itanic sank because it couldn't run x86 programs apart from in a slow emulator
Even Transmeta's Code Morphing was pretty slow.
Denver also sank because of that.
>Itanic released at the height of the Wintel monopoly
What? Itanic released while Intel was hammered to fucking death by Athlon.
And Hammer literally nearly murdered Intel in servers.
If only Barcelona was not a shitfest.

>What? Itanic released while Intel was hammered to fucking death by Athlon.

I think he means Wintel as in Windows+x86. Give people a Windows machine that can't run normal x86 .exes or that runs them really slowly and they won't be happy.

Fuck ton of cores. CPUs will mimic GPUs in this regard. The real breakthrough won't be at the hardware level anymore; it'll be software. There may not even be a defining moment for this either; it'll probably be a slow march through the swamp to standardize all software to handle parallel workloads.

We're hitting the physical limit now. I highly doubt we'll see 2x the current IPC anytime within the next 15 years.

A lot of workloads are inherently single-threaded.
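That's the Amdahl's law problem in a nutshell: if only a fraction p of the work can be spread across cores, the serial rest caps your speedup no matter how many cores you throw at it. A minimal sketch with made-up numbers (p = 0.9 is an assumption for illustration, not a measurement):

#include <stdio.h>

/* Amdahl's law: best-case speedup on n cores, when a fraction p of the
   runtime is parallelizable, is 1 / ((1 - p) + p / n). */
static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    for (int n = 2; n <= 64; n *= 2)
        printf("p = 0.90, %2d cores -> %.2fx speedup\n", n, amdahl(0.90, n));
    return 0;   /* with p = 0.9 this tops out at 10x even with infinite cores */
}

So "just add cores" only helps as far as the code can actually be parallelized.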

Kinda the same reasons why cars haven't gotten any better, but they all get more computers and control systems every year in order to spy on people and control their vehicles more easily, as well as hack into their smartphones via integrated wireless systems people don't know exist.

Play a 4K HEVC video on the 920 then.

>Industry starts pushing for high efficiency code
>pajeets get deported to their country
>C and even assembly turn into the main languages again

YES PLEASE

I can play 1080p 10-bit HEVC on my Core 2 Duo lulz.
Dat i7 920 should play 4k HEVC easily.

>only viable upgrade would be quantum processors
No, the next step is graphene semiconductors
spectrum.ieee.org/semiconductors/materials/graphene-the-ultimate-switch

Graphmeme is unusable for CPUs.

But how small can you go with that then?
Eventually transistors won't be able to get smaller because of quantum mechanics. Isn't this the problem that quantum computing solves? Can we go smaller with them? Idk, just asking.

Graphmeme is worse than asbestos healthwise. Why is this meme being posted all the time?

He already said that if we can't make CMOS smaller, we'll switch the materials.

I believe you're mistaking the dedicated graphics for the integrated one.

But eventually we'll be at the atomic level, and then you're dealing with quantum mechanics instead of the physics we use to make a transistor work. Then the material won't matter unless you can somehow replicate a transistor's behavior with quantum mechanics or something.
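The specific effect being hand-waved at here is quantum tunnelling: electrons leak straight through the gate barrier once it's only a few atoms thick. In the standard textbook (WKB) approximation, the leak probability grows exponentially as the barrier thins, roughly

$$ T \approx e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar} $$

where d is the barrier width and V_0 - E is the barrier height above the electron's energy. Halving d turns T into its square root, which for a tiny T is a massive increase, which is why leakage current blows up so fast at small nodes.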

You're not going to switch the materials to shrink xtors.
You're going to switch the materials either for more performance or to change the way xtors themselves work.

RISC and something better than silicon hopefully.

Ohh that makes sense, thx for explaining :)

HIGHLY doubt it, shitposter.
youtube.com/watch?v=S9CK8eNhBKo
youtube.com/watch?v=cI-a4WZWwZc

Intel mysteriously made improvements from kucklake after Ryzen's release. "The end of Moore's Law" is a Jewish trick by the same company that made Moore's Law.

Not the guy you're talking to, but the biggest change will be moving from a silicon substrate to a GaN substrate. In my last job, semiconductors were made with GaN by the R&D department on a small scale. I didn't get directly involved, but at the time the yield was quite low, so it was expensive to make. I wonder how well it's going now?

Good tunnel FETs seem like a reasonable beyond-CMOS tech to me: lower switching current and thus lower overall power consumption, and with lower power you could start stacking transistors vertically and whatnot without frying your chip.

If storage can be moved to 3D (as in 3D NAND), can't processing do the same? I've been wondering that for a while.
Quantum won't ever be consumer because of the extremely low temperatures needed, at least for the foreseeable future.
Carbon is more probable, but even there there will be a limit. Something must change in the architecture.

3D logic is most certainly a thing; the limitation there is just heat transfer between layers. There are certain types of transistors designed for 3D fabrication, though, not simply stacking layers like we do with NAND now.

My dumb question is why do we make the packages so small? Like 2" square.

Threadripper is exploring this more. Just make the package bigger, leave the nm architecture the same so you don't have problems on the quantum level.

Does the latency become a problem as the package increases in size? How far do the electrons have to travel before the package gets too big to benefit from however fast the silicon will allow?

>Intel mysteriously made improvements from kucklake after Ryzen's release
They barely managed more single-core IPC increase than previous generations. The only things they did differently were making their i5s 6-core (instead of 4c with hyper-threading as previously speculated) and their i7s 6c12t, which is something their HEDT/i9 line was already doing, and probably indicates it's just a rushed-out, gimped HEDT/i9 rebrand for the consumer line in response to AMD putting out 6c12t for mid-tier consumers. Hell, even their high-end i9s now are just literal rebranded Xeons with 2.xGHz base clocks that likely won't overclock for shit.

I agree though, the gimping and binning and holding 6c12t back is all a jewish trick

>My dumb question is why do we make the packages so small? Like 2" square.
Convenience and cost. Bigger packages require bigger sockets, which cost more, and bigger CPU dies require a better manufacturing process, since flaws in the silicon become more costly.

Threadripper is gluing dies together, and it works decently well since flaws in individual dies aren't as costly, but there is some latency penalty (although TR handles this better than Intel's architecture).

Once we hit the end of Moore's law, progress will come from making dies bigger or finding ways of squeezing more performance into smaller areas, like stacking dies. Basically, performance isn't going to stop improving, but it's going to require looking at it differently.
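Rough numbers on the latency question, since it came up above. Assuming a signal travels at about half the speed of light across the package (an assumption; real on-package speeds depend on the interconnect) and a Threadripper-sized ~55 mm diagonal:

#include <stdio.h>

int main(void)
{
    const double c    = 3.0e8;     /* speed of light in vacuum, m/s              */
    const double v    = 0.5 * c;   /* assumed effective signal speed on package  */
    const double dist = 0.055;     /* ~55 mm corner to corner, an assumption     */
    const double clk  = 4.0e9;     /* 4 GHz clock                                */

    double t = dist / v;                        /* one-way propagation time */
    printf("crossing time: %.2f ns = %.2f cycles at 4 GHz\n",
           t * 1e9, t * clk);                   /* ~0.37 ns, ~1.5 cycles    */
    return 0;
}

So crossing a big package really does cost on the order of a clock cycle or more before you even count routing and buffering, which is part of why the glued-dies latency shows up in benchmarks.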

My money's on photonic CPUs.
They've been pumping a whole lot of money into them for a long while, and they already got a dual-core light-based CPU running last year.

And now they're working on stuff like this.
hexus.net/tech/news/cpu/110693-on-chip-photonic-synapses-herald-new-age-computing/

Photonics just increases the amount of bandwidth a CPU can move, not its speed, since even a photonic CPU will require electronic components to make sense of that light.
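The bandwidth argument is basically wavelength-division multiplexing: one waveguide can carry many independent colours at once. Toy arithmetic with assumed numbers (channel count and per-channel rate are illustrative, not any shipping part):

#include <stdio.h>

int main(void)
{
    const int    channels    = 64;      /* assumed WDM wavelengths per waveguide */
    const double gbps_per_ch = 25.0;    /* assumed per-wavelength data rate      */

    double total = channels * gbps_per_ch;   /* aggregate link bandwidth */
    printf("one waveguide: %.0f Gb/s (%.1f GB/s)\n", total, total / 8.0);
    return 0;
}

The light still gets converted back to electrons at each end to actually compute on, which is the point above: it's an interconnect win, not a clock-speed win.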

Even if you kick the poos, you can't all kick the mexicucks :^)

Where you been?

We basically hit the single-core clock speed cap in 2003 with 3GHz processors; everything since then has been squeezing out the last oomph and adding extra cores.

Multi-CPU motherboards becoming the norm would be nice.

It'd be nice to see PCI-E card GPUs die out in favor of a socketed approach right next to the CPU.

The only reason they hit a cap was because Intel had no competition. Why spend big bucks on research when you can just sell the same shit with a 5% clock bump every year?

Once Zen performance numbers got out, that's when they started working on an actual new architecture.

It's a public forum. Feel free to enlighten the world.

This is stupid. Processor speed increases have been slowing; the rate of improvement is now two orders of magnitude lower than it was 10 years ago. I mean yeah, Intel are playing stupid business games, but we're talking a couple % difference, who gives a fuck.

There are plenty of paths forward. One of them is more parallelization in software to allow use of GPUs and the like instead. But there are many ways to improve single-thread performance too: migrating away from a silicon substrate to something better, for example, or maybe ditching electricity altogether and building optical processors instead.

CPUs have become like jets. For most people they're basically good enough. Innovation will focus on efficiency and cost, not speed and core count.

Think about the connections between cores though. Photonics to connect all the cores together is significantly faster.

Except the state of silicon tech has nothing to do with "good enough" and everything to do with physical limits. The tech has matured. You bet your ass we would have a fuck tonne of uses for CPUs running at 300GHz...

really? electricity moves at light speed right? light will probably be moving slower than that. so why would it be faster?

oh right because of bandwidth I am stupid

So basically, no, we won't have the power we need to make VR awesome enough for it to replace conventional video games?

0ms quantum internet when

Only because we don't have a full array of optical equivalents of basic electrical components yet. Once we do, no electronics at all will be needed. And that would mean a huge increase in speed and a big drop in power consumption, not just bandwidth, because the inherent slowness of electronics caused by metal wires having impedance would practically go away.

Electricity doesn't move at the speed of light. In copper wire, the speed generally ranges from .59c to .77c, and in substrate silicon and polysilicon it's even slower.
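A quick sanity check on where those fractions come from: the propagation speed is set by the dielectric around the conductor, not the copper itself,

$$ v = \frac{c}{\sqrt{\epsilon_r}} $$

so a solid-polyethylene coax dielectric ($\epsilon_r \approx 2.3$) gives roughly $0.66c$, while FR-4 board material ($\epsilon_r \approx 4.3$) puts traces down around $0.5c$, which lines up with the range above.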

We might. Foveated rendering and optimised code will decrease the power required by a huge amount.

I don't know why I thought electricity would move at vacuum light speed in any conductor. I think I heard that the speed of a signal through conventional copper cable is faster than through optic fiber, because light is slightly slower in glass, and then made a bunch of assumptions.

>mfw I'm still on my 2500k and doing everything I want with 0 problems at 1440p and you faggots are still trying to figure out if it's worth upgrading from your 6700/7700ks to whatever new garbage intel puts out

literally lmaoing at all of your lives

You are delusional if you believe this.

Are you saying all the benchmarks out there are lying? The 7700K is objectively at least twice as fast. Not to mention it uses a shitload less power per performance unit.

Nigger, 500 picometer transistors have been shown to work in experiments. "Moore's law is ending" is bullshit.

Shown to work is very different from being useful. At those sizes the leakage is so large that the power consumption goes through the roof, and reliability and durability are very, very poor. Maybe this can be mitigated, but definitely not fast enough to keep Moore's law alive.

You're hilarious, obviously you haven't done much research on this subject.

I'd say the latest architecture Intel uses surpasses Nehalem (first gen i7) by 30-40% with regards to IPC. Definitely not 100% like you're talking about, at least not with your average workload. Maybe some specific workloads would benefit from the newer platform that comes with the new Intel CPUs, since faster memory is supported on newer platforms, but I think that is only an issue with a very limited amount of workloads.

Really? I noticed an improvement going from an i7 870 to a 4790k, though I'm probably using programs that benefit from AVX and AES-NI. Bit of a shame that it released before DDR4 became widespread, but I don't particularly need the increased speeds.

>you will never have a GOAT quad-processor machine pumping 20+GHz soon
For now, I'll just look into tethering two PCs together, a la a twin-engined car.

>trust muh benchmarks goy

The processors we have in home computers would be more than enough if we didn't have so much bloated software.

Not gonna happen, even though I want it to.

AMD already figured out that to make it smaller you have to make it bigger.

>believing something can be increased to infinity
only if time were infinite

>citation needed

optical circuits
optical cpu

cooler, faster, more bandwidth, vacuums and lasers?
etc.

news.mit.edu/2013/computing-with-light-0704

>I wanna hear people predict when quantum processors will be consumer grade?

I would say the processor itself is the least of the issues in reaching economies of scale; the cooling system needed to make it work, plus the shielding, is. So when those technologies improve to become cheap/small enough, then sure. It will probably still cost you a few hundred grand.

Didn't they say the exact same thing about traditional computing? Look at what we've gone through in the last 20 years.

Traditional computers are faster than quantum computers for most things though.

There are a lot of things preventing quantum computers from seeing the same level of innovation as traditional computers, not the least of which are the laws of thermodynamics. Right now we're working our asses off just to get them to succeed at their current assigned workloads more than 40% of the time. Beyond that, the way they work isn't necessarily better than how traditional computers work; it's just different. They're great for cracking encryption, but for damn near everything else they're pretty shit.

Convince me to buy a 8700k over a 8600k.
>$100 for 2fps

All my VMs are on my server, so the extra hyperthreaded "cores" won't do anything for me. Games still aren't taking advantage of more cores, let alone more threads.

i5 2500k --> i5 4670k --> i5 8600k

Honestly I'm anticipating a major recession caused by the combination of Moore's law officially ending + trillions in student debt + baby boomers bankrupting the Social Security system + running out of ideas for new iPhones.

Moore's law is bullshit based on feels. Weren't we supposed to hit the limit in the distant futuristic year of 2015? Yet here we are with new CPUs on the horizon.

To be fair Intel did have to figure out a new way to make transistors once they got so small that the difference between on and off was pretty much undetectable. The real hard limit we're up against now is just how few atoms you can actually make a transistor out of.

Something similar to the brain, which means heavily parallelized, until other types of computing become popular.

Super multi-core processors for now, however.
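For what it's worth, the "brain-like means parallel" claim mostly comes down to the fact that a neural-net layer is basically a matrix-vector product, and every output neuron's dot product is independent of the others. A toy sketch (plain C with an optional OpenMP pragma, no real framework implied; compile with -fopenmp to parallelize):

/* One fully-connected layer: y = relu(W * x).
   Each output element depends only on row i of W and on x, never on
   another output, so the outer loop parallelizes trivially across cores. */
void layer_forward(const float *w, const float *x, float *y,
                   long n_out, long n_in)
{
    #pragma omp parallel for
    for (long i = 0; i < n_out; i++) {
        float acc = 0.0f;
        for (long j = 0; j < n_in; j++)
            acc += w[i * n_in + j] * x[j];
        y[i] = acc > 0.0f ? acc : 0.0f;   /* ReLU */
    }
}

Training is the same story at larger scale, which is why this kind of workload ended up on GPUs rather than waiting for faster single cores.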

Moore's law isn't actually a law. Also, it's completely fucking retarded. It will plateau soon if it hasn't already due to physical limitations imposed by the laws of physics. The fact that there are people who don't understand that is baffling.

Neural networks are parallelized? I don't think that is true. Can you provide a citation?

7700k matches or exceeds 8600k performance in certain cases
that would be my reason to not buy an 8600k

kys faggot my core 2 duo can barely play 1080p youtube videos

t. only uses his computer to browse Sup Forums and reddit