Is this a meme?!

Please god somebody tell me this is true. Will the white man fulfill his destiny and bring humanity to the stars?

black budget organizations are already capable of doing this. the problems at hand are: how will humanity handle getting its mind blown out its ass psychologically, how will the universe handle an immortal tumor, and who else is out there that might want to use us as fuel

No, Moore's law is dead.

humanity will be kept in a zoo after the singularity

>allowing superhuman intelligence to exist
If there are going to be synths, they're going to be no smarter than us.
Anything more is just asking for extinction.

moore's law is only an observation, user. all sorts of companies are doing hardcore SaaS shit now with dedicated chips for stuff like neural networks. you'll probably see computers with neural network chips integrated into them in a few years: the normal processor runs the OS, and the neural network chip runs everything else that doesn't need to be 100% perfect.

It's a meme.

it's called the singularity and it's closer than you think

I'm friends with a guy in computer science and he tells me computer chips will start to go 3D to continue the exponential growth. I just hope to god it's enough to make human-level AI that can solve all the problems the 3rd world has given us.

>sci-fi bullshit

>chips will start to go 3D to continue the exponential growth.

By which they mean they will flip transistors on their side, so they are taller but narrower.

It's an emergency measure because lithography resolution is hitting its limits.

4D computer chips wew

It's not going to happen, user. Idiocracy is our future. World IQ is dropping fast because of r/K selection and white people have stopped having children

I've also heard a lot about how optical computing might be a thing for the consumer in 10 years. Not sure how realistic that is though (in terms of how scalable it is)

We are moving at a faster pace now. With new materials like graphene it will grow even faster.

I hope the concept of how we use the www won't change that much. I like having a browser with a site open and a mouse to click.
All these VR memes and voice control shit are too much of a hassle.

Moore's law is not a law, it's just an observation. It can't possibly continue indefinitely, and indeed it's already slowing, because it doesn't refer to the doubling of computing power but to the shrinking of transistors (roughly a doubling of transistor density every two years). It might seem implausible, but today's computers use pretty much the exact same fundamentals as they did in the 70s

How far away, would you say, are we from mass-producing graphene chips as easily, and at a similar price, as current-day silicon chips?
Pic kinda related

Surely we are just reaching the end of the s-curve for silicon based computing. Perhaps there will be a paradigm shift in the coming decade toward molecular or optical computing?
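
A minimal sketch of why that's so hard to tell from the inside: below its midpoint, a logistic (s-curve) is numerically almost indistinguishable from a pure exponential. All constants here are made up purely for illustration:

```python
import math

# Logistic (s-curve): K / (1 + exp(-r*(t - t0))). For t well below the
# midpoint t0 it behaves like K * exp(r*(t - t0)), a pure exponential.
K, r, t0 = 1.0, 0.5, 20.0   # assumed ceiling, growth rate, midpoint

for t in range(0, 41, 5):
    s_curve = K / (1 + math.exp(-r * (t - t0)))
    pure_exp = K * math.exp(r * (t - t0))
    print(f"t={t:2d}  s-curve={s_curve:.3e}  exponential={pure_exp:.3e}")
```

Early on the two columns agree to three digits; past the midpoint they diverge wildly. You can't tell which curve you're riding until the knee is already behind you.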

Kurzweil's Singularity is pseudoscience for populist-level tech blog readers to fap over and get conned into buying health supplements.
Computer hardware isn't accelerating at an exponential rate; that's a fallacy many people incorrectly believe. It's governed by diminishing returns like everything else.
The notion that we'll build super-intelligent machines to instantly solve all problems is also fallacious. Technological advancements are reliant upon the tools available at the time. We are in a continual cycle of needing to invent and create more powerful and capable tools to address the problems we encounter, and this isn't going to change. We'll always be wanting more.


>Moore's law is dead
Tell it to the GPU market. GP100 is 15.3 billion transistors.
The issue with Moore's Law is that it's too vague, and something being "economical" is meaningless when you consider that some markets have hardware priced at $3000 or higher per unit, and that's considered a good deal.
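
As a sanity check, the naive doubling arithmetic actually lands in the right ballpark for that figure. A rough sketch, assuming the usual textbook baseline of the Intel 4004 (~2,300 transistors, 1971), which is not from this thread:

```python
# Doubling every 2 years from the assumed 4004 baseline out to
# GP100's 2016 launch.
base_year, base_count = 1971, 2_300
target_year = 2016
predicted = base_count * 2 ** ((target_year - base_year) / 2)

print(f"predicted:    {predicted:.2e} transistors")   # ~1.4e10
print(f"GP100 actual: {15.3e9:.2e} transistors")      # 15.3 billion
```

Same order of magnitude after 45 years, which is why the "dead" argument never settles: the raw transistor counts are holding up, it's the cost per transistor that's in question.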

Optical computing could very well become important for network hardware (routers and such).

But it's pretty much useless for raw computations.

I don't want human-level AI. Then all tasks could conceivably be done by machines, ensuring all need for labor will eventually be filled by machines. If we can't put human minds into machines, then we must always keep artificial intelligence at a higher cost than human intelligence for a given task.

Interesting. Is there a reason why optical computing can't perform the same type of computation as a standard silicon CPU that you'd find in a modern PC?

Moore's law is about producing more transistors AT THE SAME PRICE.
Note OP's picture is talking about computations PER $1000.

There is only one company left that's trying to reduce the cost per transistor (ASML).
All their competitors have already given up.
And ASML is having huge difficulties with their next generation of chip-making machines (EUV, extreme ultraviolet lithography).

TSMC and Samsung are both sticking with bulk silicon down to at least their 5nm nodes, which will ramp up some time around 2020. Samsung already has their 10nm node online, producing Qualcomm's S835 SoC.

Intel is stagnating a bit. Their current 14nm node has been in use since 2014, and it will remain in use for mainstream parts until 2018. The mainstream desktop product line for 2018 is 14nm Coffee Lake. Although they will have a 10nm part in production by the end of this year with Cannonlake, it will be low-power mobile SKUs only at first. Desktop Cannonlake wouldn't be coming until 2019, although it may be skipped entirely.
After 10nm Cannonlake comes the refresh, Ice Lake, which is still a 10nm part.
The successor to Ice Lake is Tiger Lake, and it's still 10nm.
So Intel will be on their 10nm bulk silicon Trigate process until 2020 or 2021, barring any future delays.

GlobalFoundries is in a unique position. They recently completed the acquisition of IBM's complete foundry business: IP, engineers, tooling, the works. IBM has long been a proponent of SOI over bulk silicon, and has done an immense amount of R&D into future high-performance SOI nodes.
Starting at the end of this year, GloFo will begin the ramp to risk production of IBM's 7nm SOI FinFET process. A radical change from current bulk processes.
Lacking EUV light sources, the process uses quad patterning, which is costly, so parts produced on it may not reach market for another year; that's yet to be seen. Functioning chips have already been produced on it, however.

Either way there is nothing but silicon on the horizon for the next 3-5 years.
A radically new substrate material is not going to happen any time soon.

Moore's Law doesn't say anything about a fixed price. It's the observation that the number of transistors in a chip at an economical price doubles every 2 years.

Not in your MacBook or whatever, at least not for several decades.
There are far more important commercial considerations than the theoretical limits of computing power, such as not having your products explode in customers' faces...

A lot of people who advocate that the singularity will happen claim that humans will become a substrate-independent intelligence (freed from the limitations of our biological brain) by mind uploading. However, this kinda unsettles me, as the uploaded mind is technically a copy of your own, right? That could lead to all sorts of moral problems related to the fundamentals of identity and what makes you you. I'm sure if it's possible someone will do it (just 'cause they can).

where's my fucking 5 nm transistor then? any news on the R&D front?

Stable battery chemistry is apparently hard to nail down, at least when it comes to a high-energy-density rechargeable battery.
Sugar has a drastically higher energy content than any battery, and it's pretty stable; we can even use it as an energy source, but getting the energy out of it is a slow process. Can't really put energy back into it either.
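
To put rough numbers on that, a sketch using textbook ballpark figures (both assumed, not from this thread): sugar at ~4 kcal/g and a decent Li-ion cell at ~200 Wh/kg:

```python
# Energy density comparison; both figures are rough textbook values.
sugar_mj_per_kg = 4 * 4.184                        # 4 kcal/g -> ~16.7 MJ/kg
li_ion_wh_per_kg = 200.0                           # a decent rechargeable cell
li_ion_mj_per_kg = li_ion_wh_per_kg * 3600 / 1e6   # Wh/kg -> MJ/kg

print(f"sugar:  {sugar_mj_per_kg:.1f} MJ/kg")
print(f"Li-ion: {li_ion_mj_per_kg:.2f} MJ/kg")
print(f"ratio:  ~{sugar_mj_per_kg / li_ion_mj_per_kg:.0f}x")
```

Roughly a 20x gap, which is why "just burn food" keeps looking tempting on paper and keeps failing in practice: the discharge is slow and there's no recharge path.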

>where's my fucking 5 nm transistor then?
Coming to a phone near you in 2 years or so.

>any news on the R&D front?
In specific regard to what? That's a pretty broad question.
Most foundries will probably go GAA after FinFETs are no longer sufficient. I'd bet on Samsung having one of the first GAA logic nodes. Their 3D V-NAND is actually a vertical GAA structure, so they've got a lot of experience with the topology.

Quantum computing will blow it out of the water

>Coming to a phone near you in 2 years or so.
so we'll get faster embedded devices? neat.

>In specific regard to what? Thats a pretty broad question.
well, are there any developments that might affect me, a not completely retarded but still average-ish user, in the near future? i'm talking about new technologies mainly.

>Most foundries will probably go GAA after FinFETs are no longer sufficient. I'd bet on Samsung having one of the first GAA logic nodes. Their 3D VNAND is actually a vertical GAA structure, so they've got a lot of experience with the topology.
so basically invest like fuck in Samsung and hope they don't fuck up with batteries again?

any clue what's going to happen with AMD?

senpai you should get a tripcode too, you seem very educated on this stuff. you an electrical engineer?

Do you believe 5nm is the node limit for silicon-based transistors, or do you think a 2-3nm node is possible?

Don't forget that they're starting to produce chips with multiple layers of transistors.

I thought Graphene was supposed to alleviate that problem at least for a while.

>tfw we are at the heat death of the universe so I went to live in a simulation to stretch out the remaining time and I spend it here

Graphene is expensive, and the technology is not mature enough for consumer products; it won't be for a while

>so we'll get faster embedded devices? neat.
Smartphones and tablets. Those devices get new hardware at a steady cadence, and because the ARM SoCs powering them are relatively small, they end up being the first things produced on a new node. Hence the new 10nm Qualcomm Snapdragon 835.

> i'm talking new technologies mainly.
Far too many things to name off the top of my head.
For a few years now there's been a lot of work going into energy harvesting tech, like building fabrics that will produce current when exposed to heat. The application is something like a sweater or t-shirt that could charge a small electronic device from your wasted body heat. These have been around for a while (one company has actually been selling garments for years), but newer materials are hitting usable levels of efficiency. Far beyond the novel use of charging an iPhone, it makes certain wearable tech basically free to power.
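For a feel of the scale involved, here's a back-of-the-envelope estimate for a matched-load thermoelectric patch; every figure below is an assumption for illustration, not the spec of any real garment:

```python
# Thermoelectric harvesting estimate: open-circuit voltage V = N*S*dT,
# maximum power into a matched load P = V^2 / (4*R). All values assumed.
seebeck_per_couple = 200e-6   # V/K, a typical Bi2Te3 thermocouple
n_couples = 250               # couples woven into the fabric patch
delta_t = 5.0                 # K between skin and ambient air
r_internal = 5.0              # ohm, module internal resistance

v_oc = n_couples * seebeck_per_couple * delta_t
p_max = v_oc ** 2 / (4 * r_internal)
print(f"V_oc = {v_oc * 1e3:.0f} mV, P_max = {p_max * 1e3:.2f} mW")
```

Milliwatts, in other words: useless for fast-charging a phone, but plenty for a sensor or fitness tracker, which is exactly the "basically free to power" wearable case.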
It'd be easier to throw some links at you than try to pick and choose a handful of things, since there are so many I could think of:

spectrum.ieee.org/
eetimes.com/
siliconsemiconductor.net/

These are part of my daily reading with my morning coffee.


>so basically invest like fuck in Samsung and hope they don't fuck up with batteries again?
I'm not much of an investor, and I've never bothered to look at Samsung's stock performance; they are a highly profitable company though. Even with the Note 7 fiasco they had ample revenue to pay for their fuck up.

>any clue what's going to happen with AMD?
I could write about this for quite a while.

>senpai you should get a tripcode too, you seem very educated on this stuff. you an electrical engineer?
Studying EE, but I don't work in the field. I don't post enough to warrant tripping either.

Are there any other nanomaterials with similar properties to graphene?

>I have no ideas about basic computing terminology
that's cool, thanks for sharing

Tell your friend he is fucking retarded and that he should look up the power wall.

Your knowledge is greatly appreciated

British education...

The Zen core arch is a pretty big deal. They're not going to dethrone Intel in terms of ultimate performance, but it's the most substantial leap of a new core architecture in its first iteration ever. They're within spitting distance of Intel's Broadwell-E, and doing so at lower power. It's impressive, and they had already started work on the successor, Zen+, back in 2015. AMD won't be resting on its laurels with a small victory; they're trying for the top again.

The Ryzen CPU line is supposed to launch at the end of February, so reviews are only a month away. It's going to be quite a big deal, especially for the enterprise market. 32-core, 64-thread Opteron server CPUs are nothing to scoff at.

It's entirely possible to create 3nm or 1nm GAAs in silicon. A while back Applied Materials had a presentation at SEMICON West where they outlined the path, in material choice and cursory electrostatic characteristics, to a 3nm GAA process. Some smaller devices have actually been created already.

There are dozens. They're what people typically refer to as "2D" materials.
Silicene, molybdenum disulfide; there are a bunch constantly getting headlines. Molybdenum disulfide has actually been used to make 1nm transistors.

Amazing how small the feature size can get. But wouldn't silicon have serious quantum tunneling problems below 5nm? Do they fix this by adding new materials or messing around with the shape/setup of the gate?

Damn, 1nm transistors? I'm guessing they must operate on very low voltages, to avoid source-drain tunneling?

y am i stupid russian bitch

we're fucked until the next Tesla
discovermagazine.com/2006/oct/cover

It's not errant electron tunneling, but leakage current, and fighting the short channel effect. As the channel section gets shorter, it's harder and harder to stop current from flowing from source to drain, even when the device is off. It's generally why it's said that a transistor is never truly "off."
This problem is why different transistor structures like FinFETs exist. The FinFET increases effective gate size, giving it greater control over the progressively shrinking channel region. GAAs, or Gate All Around transistors, are sort of the ultimate evolution, since the channel is totally encompassed by the gate on all sides.

As the channel gets shorter you start needing better and better control. You'll reach a point where decreasing channel length by 20% requires an increase of effective gate control by 200%. It becomes an exponential problem. Like trying to stop a car from rolling downhill on an increasingly steep slope. Eventually the slope turns into a vertical cliff.
The solution is different types of transistors, like the GAAs mentioned above.
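
To see why "never truly off" bites harder as channels shrink, here's the standard subthreshold-swing arithmetic, a sketch with assumed example numbers (the device parameters are made up for illustration):

```python
import math

def relative_i_off(vth, n, temp_k=300.0):
    """Relative off-state current from the subthreshold model
    I_off ~ 10^(-Vth/SS), where the swing SS = n * (kT/q) * ln(10)
    is at best ~60 mV/decade at room temperature (n = 1)."""
    kt_q = 1.380649e-23 * temp_k / 1.602176634e-19   # thermal voltage, volts
    ss = n * kt_q * math.log(10)                     # volts per decade
    return 10 ** (-vth / ss)

# A long-channel device vs a short one suffering Vth roll-off and
# degraded gate control (both parameter sets assumed):
long_ch = relative_i_off(vth=0.40, n=1.0)
short_ch = relative_i_off(vth=0.30, n=1.5)
print(f"short-channel device leaks ~{short_ch / long_ch:,.0f}x more when 'off'")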

Drive voltage is always lower with smaller devices. The 1nm carbon nanotube devices were probably being tested in the range of just a couple mV. They were lab test chips, not something aiming for any particular performance metric. So unless they had a hard wall for switching, they would have been just slightly tickled with voltage until they started working.

>For a few years now there's been a lot of work going into energy harvesting tech, like building fabrics that will produce current when exposed to heat.

i've heard about this, pretty cool.

>It'd be easier to throw some links at you than try and pick and choose a handful of things since there are so many I could think of
haven't seen eetimes or siliconsemiconductor yet, thanks.

>I'm not much of an investor, and I've never bothered to look at Samsung's stock performance, they are a highly profitable company though. Even with the Note 7 fiasco they had ample revenue to pay for their fuck up.
i'll look into it, i'm still suss about the state of the industry though. no better time than now to read up on it.

>AMD won't be resting on its laurels with a small victory, they're trying for the top again.
good shit, intel is getting complacent.

where do you believe the limit for processing power is with current tech? are we near it?

The moment we begin to "upload our minds" is the moment the concept of "humanity" disappears; an intelligent machine would evolve on completely alien parameters compared to the flesh monkeys we are.
Someone will do it, but I don't think this is a desirable way to evolve. The major problem with transhumanism today is that a lot of its activists, consciously or unconsciously, hate humankind and seek as their ultimate goal not to enhance it but to replace it.

>ultimate goal not to enhance it but to replace it.
like with everything that is outdated, you can't avoid it

>where do you believe the limit for processing power is with current tech? are we near it?
That's pretty abstract.
Independent of process node, the architecture is where performance is born. The process that the foundry delivers can determine your clocking range, power envelope, and things like that, but the performance you extract from a single clock is always entirely a product of the architecture.
Where we're seemingly stagnating now is in improving single core performance, particularly in integer-bound ops. Problem is, we've picked all the low-hanging fruit. We already did everything that was easy; now every additional fraction of a percent improvement is a costly endeavor. This is a problem mostly independent of ISA, so it's not just affecting x86 in desktops, enterprise, and HPC. Even ARM is feeling it in a big way. With the advent of the modern smartphone, the R&D pouring into development of ARM cores just exploded, and so did performance. Every year things were radically faster; performance per clock was steadily increasing with each passing generation.
Now we still see ARM making solid gains in performance per clock, but they're not as significant, and we're seeing ARM designs gain most of their performance by leveraging higher clock speeds.

The problem underpinning it all is basically the front end of a core. The biggest limitation is how fast you can decode and schedule instructions and send them to the execution units. Designing a good high-performance scheduler is one of the hardest things when it comes to any IC, and until we find a revolutionary new method to handle this, I don't see performance per clock making any huge leaps.

This isn't some universal limit, but it's a problem that'll take millions of combined engineering man-hours to sort out. Slowing progress doesn't mean progress is stopping, after all.
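
A toy model of that front-end bound, entirely made up for illustration: each instruction depends on one recent earlier instruction, the front end decodes at most W instructions per cycle, and a consumer can't complete in the same cycle as its producer. Widening the decoder quickly stops buying IPC:

```python
import random

def simulate_ipc(decode_width, n_instr=50_000, max_dep_dist=8, seed=1):
    """Crude in-order pipeline model: decode/issue up to decode_width
    instructions per cycle; an instruction stalls if its producer only
    completes in the same cycle. Returns average instructions per cycle."""
    rng = random.Random(seed)
    # each instruction depends on one recent earlier instruction
    deps = [max(0, i - rng.randint(1, max_dep_dist)) for i in range(n_instr)]
    finish = [0] * n_instr   # cycle in which each instruction completes
    cycle, i = 0, 0
    while i < n_instr:
        cycle += 1
        issued = 0
        while i < n_instr and issued < decode_width:
            d = deps[i]
            if d != i and finish[d] >= cycle:   # producer not done yet
                break                           # stall the rest of this group
            finish[i] = cycle
            i += 1
            issued += 1
    return n_instr / cycle

for width in (1, 2, 4, 8, 16):
    print(f"decode width {width:2d}: IPC = {simulate_ipc(width):.2f}")
```

IPC saturates once the dependency chains bind, no matter how wide the decoder gets. Real cores claw back some of this with out-of-order windows, renaming, and speculation, which is exactly the hard scheduler design work described above.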

Tbh, I think being so dependent on the adequate function of this one body is a very high risk; every machine breaks eventually. If this mind uploading technology gives me a way to back myself up and then download back into a new body if I die, count me in as soon as I can afford it.
This could be a good way to avoid missing something in our human simulation model

wonder who could be behind this shitpost...

Genuinely good post.

That same backwards activism will fuck us over when it comes to functional AI/artificial life because they'll go in for hugs and get fucked by beings that operate on a completely different level to us.

When they come, you should respect and fear them like you'd respect any beast in the wild.

tfw running pol on apple 2

So should we destroy the Coliseum or the Pyramids and replace them because they are outdated?
That's a quintessentially capitalistic way to see life, devoid of any spirituality: the perpetual pursuit of "efficiency," but to do what, ultimately?

All I care about is whether they can get my brain out and hooked up inside a new body before I die. I'm too good to die.

>get your Alzheimer's brain put into a robot body
There's no point.

If you want to legally check out singularity-tier technology, take 400mg of Benadryl and read a Bible.

Humans are ultimately shitty flesh sacks that house sentient multi-dimensional "computer programs"

>Coliseum or the Pyramids and replace them because they are outdated ?
no, but they don't have a function anymore, do they?

does anybody here spouting about quantum dynamics actually know what the fuck quantum computing is?