It's over

It's over.
Computers are finished.
No more computational progression.
Only 5 years left of progress.
>b-but quantum computer
No, they're useless for anything other than encryption and quantum physics simulation.

Goodbye.

22 nm – 2012
14 nm – 2014
10 nm – 2016
7 nm – ~2018
5 nm – ~2021

Other urls found in this thread:

wccftech.com/graphene-transistors-427-ghz/
extremetech.com/extreme/175727-ibm-builds-graphene-chip-thats-10000-times-faster-using-standard-cmos-processes
wired.com/2003/09/diamond/
en.m.wikipedia.org/wiki/Biological_computing
google.com/patents/US6858080
google.com/patents/US5635258
ark.intel.com/products/93790/Intel-Xeon-Processor-E7-8890-v4-60M-Cache-2_20-GHz

The bottleneck is now on the shoulders of programmers.

The future of computational improvement lies in massive parallelisation. It's the fault of lazy programmers who are still thinking in a single-threaded way and running everything on top of a fucking JIT VM.

very healthy

>ditch silicon
>processors can now go above 100 GHz

>transistor size determines performance of processors
>3D processors will never be made

>No more computational progression.
Why don't you kill yourself?

It's scary that there isn't a real alternative.

No such thing

Only increases in physical volume, and that's not advancement.

Show ONE thing that can continue this trend.

And boys, even if we find a new material, it won't last long; perhaps a decade before it reaches the atomic level again, and then we'll be completely finished.

Have fun having the same computers forever.

Huh, guess it's time to go back to banging rocks together then.

designers thrive with limitations.

The faster we get to 5 nm, the better shit we get.

>2021 arrives
>get a ThinkPad workstation
>never have to buy a PC again

>Matrix-like level simulation will never be achieved

I'll get me some rope.

Almost nailed it, but there's still some stuff we haven't tried. Mainly novel processor and memory typologies, like processor fabrics where a core can access an adjacent core's memory but not any sort of global memory. But yeah, Moore's Law had a good run; it's pretty much over.

Too expensive to use other types of semiconductors.

Although this could help with the problem that we can't move information across a modern chip in a single clock cycle because of the speed of light, we have a hard enough time cooling mostly-2D architectures. 3D will only get us a few layers. Besides, do you have any idea how many layers even 2D processors have? Like 30.

Butthurt? Complain to the fucking UNIVERSE about PHYSICS.

>No such thing

wccftech.com/graphene-transistors-427-ghz/

extremetech.com/extreme/175727-ibm-builds-graphene-chip-thats-10000-times-faster-using-standard-cmos-processes

Dear UNIVERSE,

>an astonishing 427 GHz!
Just hype.

It's like discovering photons and believing we can travel at the speed of light.

NIGGER

Plenty of room for improvement at least.

We can all go back to writing C instead of fucking java.

Silver lining?

just read the article.
The most retarded article I've ever read.
You can even see it in the comments.

Quantum Computers.

The use cases you are describing are but the tip of the iceberg in the future. If it can run calculations, it can do anything.

never read prajeet techblog comments unless you need to lose your mind with a quickness. or read them crying about I AM NOT GOOD WITH COMPUTER I WANT TO TOUCH TITY SO BETIFEL SUCC ME from thirsty ass indians and pakis.

>typologies
>making a typo that is literally 'typo'
pure pottery

Ray Kurzweil has said that silicon is the 5th paradigm and there will be a few more paradigms until the singularity. I think we have at least another 10 years of solid progress for silicon before we hit a definite wall. We don't have to keep shrinking transistors to have progress. There are other techniques that could be utilized once we hit 5 nm.


Most of the promising long-term replacements for silicon unfortunately use expensive or out-of-reach materials like graphene or carbon nanotubes.

But once we are able to create graphene and CNTs in sufficient yields and quality, we will have computers thousands of times better than silicon chips.

>Ray Kurzweil has said
opinion discarded

Hur durr edgy

>AI meme won't happen

It's been nice daydreaming about it, lads.

All x86 CPUs newer than Core2 and Bulldozer are botnet anyway

k

Does this mean my t420 will last me the rest of my life? I'm okay with this

>It's the fault of lazy programmers who are still thinking in a single-threaded way
As long as later operations need results from previous operations, that's always how it's going to be. It's not necessarily the programmer's fault.
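
To make that concrete, here's a minimal C sketch (purely illustrative, not from anyone's actual code): the first loop has a loop-carried dependency, so it must run strictly in order no matter how many cores you have; the second has independent iterations and parallelizes as a simple reduction.

#include <stdio.h>

int main(void) {
    double x = 1.0;
    /* loop-carried dependency: iteration i needs the x produced
       by iteration i-1, so this can only ever run serially */
    for (int i = 0; i < 1000000; i++)
        x = x * 1.0000001 + 0.5;

    double sum = 0.0;
    /* no cross-iteration dependency: this is a reduction and
       could be split across as many cores as you like */
    for (int i = 0; i < 1000000; i++)
        sum += (double)i * 0.5;

    printf("x = %f, sum = %f\n", x, sum);
    return 0;
}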

Look, fucktards. You know that thing that holds your ears apart? It's a fucking
>computer
And it stomps your Stinkpad in the ground. Why?
>Parallelization
Hundreds of billions of neurons. We've barely even scratched the surface of what is computationally achievable.

Whatever happened to this?
wired.com/2003/09/diamond/

in theory yes, but we've hit the physical limits, sry.

>10 nm – 2016
shiggy
>7 nm – ~2018
diggy

>If it can run calculations, it can do anything.
the height of stupidity
quantum computers require quantum algorithms, which are not always going to be faster to compute than normal algorithms
the most computationally intensive tasks most computers have to perform are video playback and sometimes rendering graphics for video games

we might see quantum co-processors at best if there are any worthwhile quantum algorithms in those two areas

Video playback is literally nothing compared to rendering vidya graphics.

I didn't say it was, just noting that those two areas are the most computationally expensive ones.
Hence why you see a lot of multimedia instructions being added to CPUs time and time again.

Honestly, it just seems they make everything bloated now as technology gets faster.
I'm talking about programs and web pages.

Imagine if they used something that required only as much memory/CPU as windblows XP or dial-up Internet pages? We could do things crazy fast.

But no, let's make an OS that requires 4 gigs of RAM and 500 MB webpages.

Umm. Trinary?
Also, why the fuck don't we have it yet.

How come no one mentions bio computers?

en.m.wikipedia.org/wiki/Biological_computing

Quantum computers/chips will be amazing for AI, 'nuff said.

Oh yes, I need to run a highly sophisticated fucking AI on my laptop that I use for watching YouTube videos and pornography.

honest question here

why don't they just make the CPU bigger?

bigger = hotter
eventually though, once we hit the transistor size limit, we will be making bigger processors

If we can pack down what we could and more for 22nm in 14nm let's say, why don't we pack even more than what we could fit in 14nm in 22nm if that makes sense?

You don't need to, but at some point in the not-distant-enough future it's going to get shoved down your throat anyway.

Indeed. You could have it pre-watch things and give them a rating based on your preferences, vastly improving your masturbation sessions by filtering out clickbait.

>Indeed. You could have it pre-watch things and give them a rating based on your preferences
YouTube already does this.

...

Diamond and graphene processors are already well into development, so no, not only is it not the end of computational progress, dare I say it's the beginning of monumental leaps and bounds.

In the meantime there are actually still quite a lot of architectural tweaks and designs left in silicon, so don't worry, you'll have more fanboy bullshit to spew in the meantime, I assure you.

But yeah, the easy path of 'just shrink the die again' is coming to a close. And it's about damned time.

I don't understand what you're trying to say.

>And it's about damned time
Yes, I am also tired of CPUs just werking, I want them to fuck the architecture up.

IF WE CAN PACK DOWN, LET'S SAY, A BILLION TRANSISTORS IN 22NM AND 1.5 BILLION IN 14NM, THEN WHY DON'T WE PACK 2 BILLION IN 22NM?

Good. It's not quite breaking free from the shackles of technology, but it's better than blindly Moore's Law-ing toward all-powerful AI gods controlling every aspect of human society, assuming they don't just wipe us out like bugs.

More like Graphmeme

It's the hottest shit in STEM and no one can figure out how to use it.

okay so you are fucking retarded, thank you for clearing this up

quick question: what do you think those nanometer measurements represent?

If you make them larger, you will hit issues with the speed of light not being able to travel through the whole CPU in one clock cycle.
Information is not transferred instantaneously.

calling bullshit on this one

Quantum non-deterministic computers aren't any more capable than deterministic computers, they just solve certain problems faster. At most it will be akin to a GPU. Never will you run general purpose computing on a quantum machine.

google.com/patents/US6858080
google.com/patents/US5635258

Distance between identical components on the fabricated chip

>long-run investment in R&D for one specific technology has declining marginal rate of return
who'da thunkit?

Because a bigger processor is more expensive to manufacture and has lower IPC, which matters to those who play severely CPU-crippled vidyas that can only run on one CPU thread efficiently.

We actually already have a 24-physical-core Xeon processor that can run at 2.2 GHz under maximum load. It has about 3x the performance of a desktop i7 with a TDP of only 165W. Did I mention it costs $7,000+?

ark.intel.com/products/93790/Intel-Xeon-Processor-E7-8890-v4-60M-Cache-2_20-GHz

However, like I mentioned, these fuckers are goddamn expensive. So much so that those building servers or supercomputers opt for cheaper 8-core Xeons to put in parallel.

>they're useless for anything other than encryption and quantum physics
>>hurr durr i can read stuff of which i barely understand anything and then start shitposting about it leaving the impression i know what the fuck i'm saying
>>t. Dr. summerfag, future mcdonald employee of the month

>Biological_computing
yeah but it doesn't matter if it arrives at its destination according to our frame of reference as long as its time happens relatively the same in its own space-time, and that's not even accounting for mass or gravity

in case you haven't googled it yet, the nm measurement in the context of processors refers to the size of the transistors

Although I disagree, I don't feel the need to argue and only came into this thread to examine the breasts of that woman and blog about it.

Thank you and no thank you at the same time.

Light travels at 3.0e8[m/s] IN VACUUM

Let's say you are running a 4GHz machine.

1/(4e9 Hz) = 0.25e-9 s

0.25e-9[s] * 3.0e8[m/s] = 0.075[m]

That means at most your in-vacuum CPU can move information 7.5 cm per tick. Now, CPUs are built in silicon, and so signals propagate slower. Also, traces don't go as the crow flies. These are back-of-the-envelope calculations, but they show that die sizes and clock speeds are truly limited by how fast light travels.
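
If anyone wants to check those numbers, here's the same back-of-the-envelope arithmetic as a few lines of C (vacuum speed of light and a 4 GHz clock, same assumptions as above):

#include <stdio.h>

int main(void) {
    const double c = 3.0e8;   /* speed of light in vacuum, m/s */
    const double f = 4.0e9;   /* clock frequency, Hz */
    double period = 1.0 / f;  /* one tick: 0.25e-9 s */
    double dist = c * period; /* distance light covers per tick */
    printf("period = %.2e s, distance = %.3f m (%.1f cm)\n",
           period, dist, dist * 100.0);  /* 0.075 m, i.e. 7.5 cm */
    return 0;
}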

It's actually worse than that anon said. Presently, modern CPUs take more than a single clock tick to move data long distances. Optical buses may help alleviate that a tiny bit, but there would have to be substantial room for optical couplers and decouplers. Moreover, optical buses are only useful for long-distance buses, not short ones.

Blow me, edgelord... it is about damned time. The best work is always done when facing a difficult dilemma.

This will do nothing but spur advancements, as such things usually tend to do. This is not the first time jackasses have wailed about how the CPU cannot progress any further no matter what. This shit has happened like half a dozen times already, and every time a way to overcome it has been found and implemented.

Now we have the opportunity to explore new things. Things are going to get really fucking exciting in the next 5-10 years. You won't be disappointed.

Except for your failure to troll...that will always disappoint.

>parallelization in C
god damn i might just wanna kill myself from reading that phrase alone
handling mutexes/threads in C is a total fucking nightmare

Is there any reason we can't just make processors bigger?

Yes, if you could read the fucking thread.

lol wut. it's the only sane language to do that stuff in.

A brain is much different than a conventional computer.
To give you an idea, a conventional computer requires memory (in which instructions and stored data lie) and a processor (a set of transistors organized as logic gates, designed to be controlled by a certain instruction set) which does the computation.
Whereas in a brain, the neurons do both the memory and the calculation simultaneously. Imagine having no HDD and your processor doing all the work without memory or instructions, and you have a brain.

care to expand on it a bit? I've only done mutexes/threads in Java, though I can't see why it'd be so different in C

It's a fucking miracle with the huge ass instruction sets we have now that processors work as well as they do.
I have no doubt that we will continue to make computational progress, but switching from transistor shrinking to architecture overhauls is going to be extremely painful.

>read thread
>you're just getting butthurt and calling people stupid over and over

Real fuckin educational. Go jump off a building, faggot.

See above. Nigga, do you want to pay $7K+ for a fucking processor?

>handling mutexes/threads in C is a total fucking nightmare
You literally lock and unlock a fucking thing, how dumb are you?
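
For the anons who've never touched it, this is roughly all that "lock and unlock a fucking thing" amounts to; a minimal pthreads sketch (illustrative only, compile with -pthread): two threads bump a shared counter under one mutex.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   /* take the lock */
        counter++;                   /* critical section */
        pthread_mutex_unlock(&lock); /* release it */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter); /* always 2000000 */
    return 0;
}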

literally explained 3 posts up from your stupid image macro you fucking faggot, kill yourself

Why don't these idiots just add more cores and make the CPUs bigger

ffs

please don't bait with the others.

if by sane you mean it gives better performance than things like Java, then yes

it's not too much different, but on top of all the other errors you can run into that the language would manage for you in something like Java, it can be very difficult

any program getting reasonable parallelization will generally spin off more than 2 or 3 fucking threads; managing them properly so they don't deadlock can be nontrivial in any program solving a real problem
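
Here's the deadlock hazard that anon is talking about, as a hedged sketch (hypothetical thread functions, not from any real program): two threads taking the same two locks in opposite order can each end up waiting on the other forever.

#include <pthread.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *thread1(void *arg) {
    pthread_mutex_lock(&A);
    pthread_mutex_lock(&B);   /* blocks forever if thread2 holds B */
    /* ... work on shared state ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return arg;
}

void *thread2(void *arg) {
    pthread_mutex_lock(&B);
    pthread_mutex_lock(&A);   /* blocks forever if thread1 holds A */
    /* ... work on shared state ... */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return arg;
}

/* The usual fix: agree on one global lock order (always A before B),
   so the wait cycle can never form. */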

>muh triple threads!
So how the fuck do other languages make it any better, besides adding a bunch of retarded constraints that you can add with C?

i usually use c# for this kind of stuff
so I guess my argument boils down to the fact that it's more built into the language than it is with something like C

use an existing thread pool implementation like the ones glib/ck/OpenMP have; you don't have to touch that shit just because it's C
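
For illustration, this is roughly what handing the problem to OpenMP looks like (a sketch under those assumptions, compile with something like gcc -fopenmp): the pragma farms the loop out to a pool of threads and handles the reduction, no manual mutexes or pthread calls anywhere.

#include <stdio.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;
    /* OpenMP splits the iterations across a thread pool and
       combines the per-thread partial sums at the end */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += (double)i;
    printf("sum = %f\n", sum);
    return 0;
}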

Nigga you trolling or what?

NM means the size of the gate grid. Each gate grid can fit 4x4 transistors in it.

Common knowledge

>Only increases in physical volume, and that's not advancement.
AFAIU, advancement stopped a while ago already. The latest processors are not much better than those made in ~2008-2010... except that they use more power to get more computational power, plus minor design improvements to get a bit more juice.

Is the singularity a meme?

Why can't we make CPUs bigger?

>run AI on quantum computer
>AI replaces "lazy programmers" for the stuff that has to use traditional algorithms

>Too expensive to use other types of semiconductors.

what if everyone gets a dumb screen with wifi and access to huge mainframes?

>tfw God is watching over us

shhhh

They're listening. That's probably what'll happen in the future.

who is this semen demon?

no, it is not.

People saying it's going to happen within a hundred years are fucking kidding themselves though.

This. Google returns a dead-end.

I want to impregnate those tits

Also, Non-Si transistors

shut up

>hundreds of billions

It does have some really impressive properties, and I'm still hoping we find a way to use it and mass-produce it efficiently and cheaply in the near future.