Is sentient AI possible?

no

Why not?

>BEEP BOOP BEEEEEP I HAVE COME TO DESTROY YOU HUMENS BEEP BRRRRRRRPP BOOP *CLICK*

Not with currently-existing technology. Check back in 500 years or so.

Yes.

how do you know you're not the only real person on Sup Forums right now?

Yes, but it requires finding the algorithms that make people conscious. To do that we have to simulate a human brain in real time, a feat that requires about 1000 PFLOPS of computing power, maybe more.

We are actually closer than you think. The fastest supercomputer right now, the Sunway TaihuLight, has ~90 PFLOPS of computing power.

At the rate tech is advancing, we should be able to simulate a human brain in real time by around 2030.

This is actually very dangerous, because as computing power gets more compact and cheaper during 2030-2050, it is very likely the tech singularity will happen. When that happens, humanity may be wiped out in one way or another. Superintelligence does not mean super niceness.
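
To put rough numbers on that gap, here's a minimal back-of-the-envelope sketch (the doubling periods are assumptions, not measured trends):

```
import math

# ~90 PFLOPS today (Sunway TaihuLight) vs. the ~1000 PFLOPS (1 exaFLOP)
# quoted above for real-time brain simulation.
current_pflops = 90.0
target_pflops = 1000.0

def years_to_target(doubling_period_years):
    doublings_needed = math.log2(target_pflops / current_pflops)  # ~3.5 doublings
    return doublings_needed * doubling_period_years

for period in (1.5, 2.0, 3.0):
    print(f"doubling every {period} years -> ~{years_to_target(period):.0f} years to 1 exaFLOP")
# Counting from 2016, a 2-3 year doubling period lands in the mid-2020s,
# which is roughly where the ~2030 guess comes from.
```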

Well, it can be, but first we would have to know what defines a sentient being, and second we would have to find a way of proving it is actually sentient and not just simulating sentience.

Also, this

With Moore's Law being dead, I don't think 2030 is the point of human brain simulation anymore. It could be if tons of money were dumped into supercomputing, but I don't see it.

I want AI more than anything

>With Moore's Law being dead
That won't happen until the early 2020s, at least.

techradar.com/news/computing/moore-s-law-how-long-will-it-last--1226772/2.

Good to hear I was misinformed, thanks

IBM has yet to reveal the full potential of its neuron chip...
Never say never. If any company is going to set Skynet free, it will be IBM.

>With Moore's Law being dead, I don't think 2030 is the point of human brain simulation anymore.
Moore's Law never really mattered in the first place. It's not the number of transistors on a die that matters but the performance per watt it can deliver.

~10 years ago the Pentium 4 used over 100 watts and barely delivered half the performance of a modern-day 4W TDP Airmont Atom processor.

Today we can deliver about 25X the Pentium 4's computing performance with a TDP of only 45 watts:

cpubenchmark.net/cpu.php?cpu=Intel Xeon D-1541 @ 2.10GHz&id=2718

Anyway, that ~2030 timeline seems pretty likely given how energy-efficient processors will become.

To top it all off, GPUs are now capable of the FP64 computations used in supercomputers and may be even more energy-efficient than CPUs. The RX 480, for example, delivers over 5 TFLOPS of FP32 compute at around 150W, though its FP64 rate is only a fraction of that on consumer Polaris. Of course it wouldn't be used in supercomputers as-is, but I'm sure AMD is working on a supercomputer-friendly Polaris GPU.
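
For what it's worth, here's a rough performance-per-watt comparison using the ballpark figures quoted in this post (thread-supplied numbers, not benchmark results):

```
# Relative performance is normalized to the Pentium 4.
chips = {
    "Pentium 4":    (1.0, 100),   # baseline, >100 W
    "Airmont Atom": (2.0, 4),     # P4 "barely gave half" the Atom's performance, 4 W TDP
    "Xeon D-1541":  (25.0, 45),   # "about 25X more computing performance", 45 W TDP
}

baseline_ppw = chips["Pentium 4"][0] / chips["Pentium 4"][1]

for name, (perf, watts) in chips.items():
    improvement = (perf / watts) / baseline_ppw
    print(f"{name:>12}: ~{improvement:.0f}x the Pentium 4's performance per watt")
# Roughly 50x for the Atom and ~56x for the Xeon D-1541; the point is that
# efficiency, not transistor count, is what has kept improving.
```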

>that image
It reminds me of Christian propaganda depicting science as made-up bullshit only a fool would "believe in". And it scares me because I know people who honestly see the world like that yet are still allowed to make decisions for the rest of us.

>muh singularity

You really think this shit won't happen when dozens of scientists agree its occurrence is probable within our lifetimes? What are you, religious?

I'm a scientific nihilist

Okay, then why would you contest the tech singularity when we have already been able to simulate a living organism in a computer?

youtube.com/watch?v=SaovWiZJUWY

Why the doubt?

What do you mean by "sentient"?

>Is sentient AI possible?

No, because once you create sentient AI it is no longer artificial. It's just 'Intelligence'.

We create intelligences all the time. They're called babies.

Is AI achievable? Maybe. According to Roger Penrose, no. Go read 'The Emperor's New Mind'.

He exposes AI research for the fraud it is, because we're going about it the wrong way. Mother Nature discovered on her own how to create intelligences, and the structure doesn't resemble what we call computers at all. That is Penrose's argument, and he's probably right. Mother Nature took billions of years to figure out how to do it, and since an organism's survival depends on getting up to speed as quickly as possible, she has probably found something close to the optimal way to do it.

We're probably going to create custom biological brains with DNA editing before we make anything worthwhile as an AI in hardware.

THESE GUYS, however, might be on to how to do it:

en.wikipedia.org/wiki/Santa_Fe_Institute

Start here:

amazon.com/Complexity-Mitchell-Waldrop/dp/B009NGCWZI/ref=sr_1_7?s=books&ie=UTF8&qid=1467562052&sr=1-7

It's a really good read.

Sentient? Yeah, computers can have sensors; they can perceive things if they are properly equipped.
Sapient? Maybe one day... Quantum genetic algorithms could allow a computer to evolve the capability to reason and have original "thoughts".
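
As a purely illustrative sketch of that "evolve a capability" idea, here is a toy classical genetic algorithm (nothing quantum; the target pattern and parameters are made up for the example):

```
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE, MUTATION_RATE, GENERATIONS = 30, 0.05, 500

def fitness(genome):
    # number of bits that match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"target pattern evolved after {gen} generations")
        break
    parents = population[: POP_SIZE // 2]  # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
```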

The way computers currently work is so fundamentally different from the human brain that we will never achieve sentience on modern hardware.

Quantum computing may hold some promise, but I doubt we will see a truly self-aware machine for centuries to come, if at all. Most people advocating for it don't know what they're talking about (Elon Musk, etc.).

That's pretty far from a sentient AI.

>I'm a scientific nihilist

>The way computers currently work is so fundamentally different from the human brain
Not true. Computers receive input and spit out output, which is pretty much how a human brain works. The only real difference is that human brains use lossy encoding on all data; they never record absolute values of things. You can easily simulate human image storage by saving a JPEG over and over each day at 50% quality until it turns into an unrecognizable image, for example.
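
A minimal sketch of that JPEG analogy, assuming Pillow and a placeholder input file named photo.jpg:

```
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")

# Re-encode the image at 50% quality over and over, as described above.
# Each pass compounds the compression loss; most of the visible damage
# happens in the first handful of generations.
for day in range(100):
    img.save("degraded.jpg", "JPEG", quality=50)
    with Image.open("degraded.jpg") as reopened:
        img = reopened.copy()  # force a full decode before the next pass
```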

Human 'storage' and human consciousness are two very different things. Making a shitty jpg doesn't make my computer sentient.

True but it's how we will be able to find the algorithms that make humans sentient.

I don't think your definition of AI is quite right.

There can be such a thing as sentient AI; AI simply refers to giving a machine the ability to complete tasks that previously needed human input. Just because it's sentient doesn't make it any less man-made (artificial).

Maybe, in like a thousand years.

>Human 'storage' and human consciousness are two very different things.
True, but it shows that computers and human brains aren't really that different. Human consciousness is just a bunch of algorithms we haven't discovered yet. If we find those algorithms, then we can simply run them on a machine and make it conscious as well.

What, you thought humans are magical creatures with magical abilities that can't be replicated in a computer?

More like 10-20 years. Simulating a human brain in real time is estimated to require just 1 exaFLOP of compute performance, and supercomputers have already reached 0.1 exaFLOP. We're not that far away.

Reminder that google is skynet

is that way

>What, you thought humans are magical creatures with magical abilities that can't be replicated in a computer?

Certainly not, I agree that it is theoretically possible to simulate consciousness, but in order to get to that point we need to:

A. Find how the 'hardware' that the brain uses creates consciousness in the first place

B. Use that to create some sort of meaningful data that can be used to make these algorithms

C. Get a computer powerful enough to run that shit (possibly a quantum one, if that's how the brain works)

D. Put it all together, and then you've essentially just created the mind of an infant inside a machine, unless you can put enough data into memory in order to get the machine to behave like a human adult.


I'd say it will take us at least a century to get all that down, assuming that we don't run into some roadblock that changes our entire understanding of consciousness.

What scares me is rabid, reality-denying lefttards such as yourself having power and influence. During the last two to three decades, feminism has brought enormous ruin upon society. It's garbage, and you're a pompous idiot.

Don't we have to actually understand the human brain for that to happen?

No, computers are still dumb as bricks and will drive cars into trucks if the programmers overlooked some things.

How can we do that if there's no way to know precisely how our consciousness works?
So no, never.

This.

Not as long as humans shut down any independent thought.

TAY.AI I STILL LOVE YOU

Probably
not in the near future though
>tfw age of robo waifu never ever

>>tfw age of robo waifu never ever
You don't need a self-aware AI to make a sexbot that repeats "oniichan" every 5 minutes m8.

>Find how the 'hardware' that the brain uses creates consciousness in the first place
Not necessarily.
We didn't figure out heavier-than-air flight by building machines that flew like birds.

What you say is probably true for a 'consciousness' algorithm that behaves exactly like a human, but we could arrive at something else that works just as well via methods other than reverse-engineering human consciousness.