For when Planck length transistors?

Other urls found in this thread:

htwins.net/scale2/
laboratoryequipment.com/news-Allowing-Errors-Makes-Chip-More-Powerful-Efficient-051712.aspx
gizmodo.com/5911399/this-imperfect-processor-is-15-times-more-efficient-than-yours
twitter.com/AnonBabble

(Planck time year)^-1

It is impossible to make Planck-length transistors.

If we were ever able to make anything "Planck length", we'd have something better than transistors by then.

Well then.
According to htwins.net/scale2/
the smallest confirmed length is 100 attometers.

For when attometer transistors?

Never.

Well before then, quantum mechanics means the electrons will just tunnel through the gates, making them worthless.

Quantum tunneling is already an issue at sub-5 nm.

It only happens randomly, so there's a non-zero chance that they won't jump.
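To put rough numbers on that "non-zero chance": a toy estimate using the standard rectangular-barrier tunneling formula T ≈ e^(-2κd). The 1 eV barrier height and the gate widths here are illustrative assumptions, not figures from any real process node:

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
M_E = 9.109_383_7e-31     # electron mass, kg
EV = 1.602_176_634e-19    # 1 eV in joules

def tunneling_probability(barrier_ev: float, width_nm: float) -> float:
    """Crude WKB-style estimate T ~ exp(-2*kappa*d) for a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# An assumed ~1 eV barrier: tunneling is negligible at 5 nm, not at 1 nm.
for width in (5.0, 2.0, 1.0):
    print(f"{width} nm barrier: T ~ {tunneling_probability(1.0, width):.3e}")
```

The per-event probability explodes as the barrier thins; multiply that by billions of transistors switching billions of times per second, and "it only happens randomly" stops being reassuring.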

Never. By then quantum computing will no longer be a meme and will actually take over regular processing applications.

how long is an electron?

>It only happens randomly so there is a non 0 chance that they wouldn't jump.
So each transistor will only work properly some of the time.
Wonderful. Let's put billions of transistors that don't work properly all the time onto a CPU and see what happens.

Idiot.

>laboratoryequipment.com/news-Allowing-Errors-Makes-Chip-More-Powerful-Efficient-051712.aspx

whoops, guess I'll go with the shitty article instead of the source that has moved
gizmodo.com/5911399/this-imperfect-processor-is-15-times-more-efficient-than-yours

>allow errors
>waste 20 times more CPU time to fix them in software
>muh efficient cpu

It's starting to rear its head at 10 nm.

You don't need to fix the errors, there are plenty of applications that don't need perfect results.

This proves we live in simulation of pixels

Kind of already: DNA is essentially made of transistors.

The future is in biocomputers with programmable proteins.

Using electrons rather than bosons to carry information.

Assuming that string theory is true, then the ability to manipulate information at the planck length scale will be akin to performing mathematical operations on the universe itself. If we as humans can be said to do that, then we will no longer be humans; we will have evolved into a greater cosmic being that has obtained all information and is, therefore, all information itself. At that point, universal entropy will hit its maximum, as all information is in complete disorder, with no way to do any work. And yet, we will be there, the quantum interpolation of consciousness within an infinite plane of uniform energy. Will we then, perhaps, be able to create anew? Will the entire summation of all creation be able to redefine its own being as something meaningful, or will it simply dissipate into a bit, a final zero, to end the line of binary that seemed to stretch on for all of eternity? Only time will tell.

>2160
>still using electrons like a pleb

Quantum tunneling is a phenomenon that enables DNA to compute temporal/spatial requirements of gene expression.
You read it here first folks.

Don't come bitching when your PC gets cancer.
Sure, DNA polymerase and flow are good at avoiding write errors, but I'd be laughing at information cancer.
No, not data corruption. An mRNA so aberrant, it'll literally bloat your data.

>information cancer

we already have your penis at planck length

And plenty of applications that DO. Imagine if it makes errors during malloc.

And it only improves power consumption.

Power consumption is what generates heat. I assume they're talking about things like floating-point math, and it's not like you do malloc with floating point on a GPU.

The article mentions that such processors would obviously be useless for larger machines but could very well be used for simpler ones that don't require high accuracy. In those cases the lower power consumption could definitely be useful, but it's not the groundbreaking discovery the headline is trying to sell.

You do realize that the planck length is smaller than an atom, right?

It would make more sense to implement this in a situation where many processors run the same/similar code, so they can error-check each other by averaging or majority rule. But that would have to be done with add-on processors, not the main processor in the machine like your desktop CPU.
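A minimal sketch of that majority-rule idea. The "unreliable core" and its 5% error rate are made up for illustration; the point is just that voting across redundant runs suppresses most errors:

```python
import random
from collections import Counter

def unreliable_add(a, b, error_rate, rng):
    """Hypothetical sloppy core: occasionally flips one bit of the result."""
    result = a + b
    if rng.random() < error_rate:
        result ^= 1 << rng.randrange(8)
    return result

def voted_add(a, b, error_rate, rng, copies=3):
    """Run the same op on several sloppy cores and take the majority answer."""
    results = [unreliable_add(a, b, error_rate, rng) for _ in range(copies)]
    return Counter(results).most_common(1)[0][0]

rng = random.Random(0)
single_wrong = sum(unreliable_add(10, 32, 0.05, rng) != 42 for _ in range(10_000))
voted_wrong = sum(voted_add(10, 32, 0.05, rng) != 42 for _ in range(10_000))
print(f"single core wrong: {single_wrong}/10000, majority-of-3 wrong: {voted_wrong}/10000")
```

Voting doesn't eliminate errors (two cores can fail on the same operation), it just pushes the wrong-answer rate down by roughly an order of magnitude at these parameters.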

I hate this interpretation of the Planck length. Anything below the Planck length is in the quantum-gravity regime; it does make sense, we just can't describe it.

Just trigger a false vacuum and begin anew.

Not in a physical sense, though. The scale is too small.

>make many power efficient cores
>run the same task on all and use what the most cores get as the correct result

How is that more power efficient than just running on one good core?
Efficiency per core is useless if efficiency per task is still shit.

Not all of them, obviously. It would depend on the error rate, which could mean as few as three running the same code.
Which you might do anyway in some cases, since errors can come from more than just the processor being designed that way.

I think the more sensible application would be in things like GPUs. You probably won't notice 1 pixel in a thousand being off for 1/60 of a second, but with that much power saving you could probably do a lot more.

I'm not sure how good an idea that would be. From the article
>and relative errors as high as 7.5 percent still produced discernible images.
The "still produced discernible images" part is what worries me. You can get a lot of corruption and still have a discernible image. You want it to remain indescernible.

muh gaems don't :^)

Positronic computing when?

Excuse me sir but as a competitive gamer who only plays competitive games like le Global Offensive, anything less than 100% computational accuracy is unacceptable. All of my equipment is error corrected and hardened against solar radiation bit flipping.

Well, such a setup with three cores running common code would have to be three times as power efficient per core just to *match* a single robust core, which doesn't seem feasible either way. And then it would still lose by needing a fair bit more die area and, with that, manufacturing cost.
I can only see it being useful in error-tolerant fields like video or lossy image encoding, and that's it. Which is basically what the article was writing about IIRC (I read it a long time ago, so my memory on it might not be so robust... kek).

>tfw SRBF is a legitimate concern at work

fuck the sun tho

Just install Linux on your DNA, it will die of AIDS and disgust.

you mean qpixels

>2160
>1
>6
>0

>CPU buried under feet of cement
sick gaming rig, bro

>would have to be three times as power efficient per core
According to the article they are 7-15x as efficient, which clears that bar. However, it's true that just adding 3x as many processors does have a lot of other costs.
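Back-of-the-envelope check of how those numbers interact, assuming (per the article's claim) 7-15x per-core efficiency and that triple redundancy triples the work per result:

```python
# Net efficiency of majority-of-3 on sloppy cores vs one robust core.
for gain in (7, 15):
    net = gain / 3  # three redundant copies of every computation
    print(f"{gain}x per-core efficiency -> {net:.2f}x net with triple redundancy")
```

So even after the redundancy tax the sloppy cores come out ahead on paper; the die area and manufacturing cost are the part this arithmetic doesn't capture.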