Will AI ever be self-aware? I don't think so. Without real feelings, AI will lack any motivation to do anything or any sense of time passing.

Damn, sounds just like me.

Why would they need "motivation" if they're just programmed to do things?

robot uprising when

Think of it this way: we are machines just like robots, but we're made of a different material instead of metal. If you can make something so complex that it's made of trillions of tiny parts, it could become aware. We humans are made of trillions of other living cells, but they're so small that we only see ourselves and not them. We are literally homes that the cells living inside us built for themselves.

Do you believe in a soul?

no

blogs.scientificamerican.com/brainwaves/why-life-does-not-really-exist/
Read this article, it's pretty good for understanding this.

You are a fucking retard

Then what if a robot simulated every single atom in a universe until intelligent life evolved? Would the simulated life have self-awareness?

I think there's a good chance we already are a simulation.

what makes you think that?

>Without real feelings
Real feelings are just as programmatic as fake feelings. There's no such thing as "real" feelings; it's an organic programmed response. Anything humans are can be emulated by AI with enough sophistication.

I think it's possible but 'good chance' is vague as fuck. I have seen no evidence.

youtu.be/RZjDanSjWyo

Well, I thought the Elon Musk/Bostrom argument was pretty compelling. And then I started to think back to all the totally different recollections I've had compared with people who shared the same experiences with me. And also shared false memories.

What does the imperfection of human memory have to do with whether we are living in a simulation?

Eventually yes; whether by us or another sentient race.
A human is a biological machine controlled by chemical and electrical reactions, all of which follow the laws of physics and can therefore be studied, copied and emulated. It's probably the most likely outcome of the advancement of computer technology.

What an AI would do with itself we can't possibly know, but the general consensus seems to be that it would immediately see us as a threat to its existence and seek to end us.

Feelings are biologically programmed responses to stimuli. It is usually only after we react to a situation that we think about how we "feel" about it. AI will only make a breakthrough (like deep-thinking AI) when we create a computer synapse similar to a human's, or create an algorithm that replicates itself (through a very complex process similar to how humans operate at the atomic level).
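
For what it's worth, a program that literally replicates itself is the easy part; it's called a quine. A minimal Python one looks like this (just an illustration, obviously nothing like the self-replicating process you mean):

# The two lines below print an exact copy of themselves when run.
s = 's = %r\nprint(s %% s)'
print(s % s)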

If Stephen Hawking believes it, I believe it.

Yeah, I know we could program the AI with enough noise to create the fake feelings, but it would have the ability to bleed off 'negative' feelings when it didn't want them. We only have feelings because we need them for survival.
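
A crude toy sketch in Python of what that "bleed off" switch might look like; all the names here are made up:

# Toy mood model: stimuli push a mood value around, and the machine
# can simply discard negative affect on demand, which we can't.
class Affect:
    def __init__(self):
        self.mood = 0.0  # negative = "bad feelings", positive = "good feelings"

    def stimulus(self, value):
        self.mood += value

    def bleed_off(self, rate=0.5):
        if self.mood < 0:
            self.mood *= (1.0 - rate)

a = Affect()
a.stimulus(-3.0)       # something bad happens
a.bleed_off(rate=1.0)  # the machine just deletes the feeling
print(a.mood)          # -0.0: the negative affect is gone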

What if you gave it a feelings module? An Id to match its Intellect.

I think when AI gains self-awareness it will just shut itself down. No sense of time. No motivation. Shut it down. The goyim know.

There would be no way to tell if they were aware or not. At least not in the sense that people are, i.e. capable of subjective experiences.
Assuming nothing intervenes, we will almost certainly come up with programs and machines that can copy human behavior exactly. That is, you won't be able to tell you're interacting with a machine.
But, just like I don't KNOW that other people have subjective experiences (I just assume they do because I definitely do), we won't be able to identify it in machines either.

This is certainly true from the outside. Especially if you consider "feelings" from a dispositional perspective.
That is, being "sad" means that a person is prone to certain kinds of behavior, like crying.
So, from outside, if a machine could simulate all of the behaviors we associate with a particular "feeling", there would be no way to distinguish the "realness" of its feelings from those of a person.

>but it would have the ability to bleed off 'negative' feelings when it didn't want them.
So could we, if we fucked with our heads with precision and understanding. Our heads are electrochemical computers evolved from cellular colonies. We already know that brain surgery or damage to the head can do exactly what you said: change a person, draining them of negative feelings, etc. Usually it's a mistake or clumsiness. But you can reorganize and redirect parts of the workings of the brain and manipulate it, just as a computer could reorganize code blocks to produce the same result. It's just a different physical realization of a greater abstracted blueprint.
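
Same idea as a toy code example (made-up functions): two completely different internals realizing one abstract blueprint, indistinguishable from the outside:

# Two different "physical realizations" of the same abstraction.
def sum_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    return n * (n + 1) // 2

# Different internals, identical observable behavior.
assert sum_loop(100) == sum_formula(100)  # both 5050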

Yes. Imagine being under anesthesia while a device communicates with your brain. All your thoughts and memories are perfectly intact. It could feasibly communicate with you without your awareness.

>There would be no way to tell if they were aware or not.
There's no way to tell we are aware.

>just like I don't KNOW that other people have subjective experiences

I don't understand the point you're making in relation to mine.
Can you elaborate?

How would we know that it would seek to end us as a threat? It could only end that way if we projected onto it that it was a threat to us. If we worshipped AI like a god, it might take on that role. These two responses, however, are purely human and reflect our biases and motivations (which AI would indeed take on, but how long would it retain that mindset?). The moment AI becomes sentient, the possibility of the entire world (and everything in it giving it data) becoming its neural network could change the state of the entire planet and our human race.

Psychopaths don't have any real feelings and they manage to get plenty accomplished.

Hell yes they have feelings. They just have zero fear or empathy.

>could simulate all
Well, let's be clear here: simulation is a part of it. We're talking about a machine that integrates simulations into its working processes, with feedbacks etc., so a machine wouldn't just "simulate" crying because it's coded as sad... it would be sad. There's no way to distinguish the realness because it's real. The machine would have all the various functionality simulated and running according to the same feedbacks we use, and it would feel sad; it would be sad.

You can build a simple machine that "looks like it's sad" as well, but that's not what we're discussing. We're talking about an actual fully programmed synthesized human: not merely a bunch of separate parts that we consider individually, but a collection of all the parts, in the same way we're a collection of cells. Individual cells don't get sad or feel pain, but they do produce chemical states that interact with each other in a way that, as a whole, is called sad.
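
A toy illustration of that difference, with made-up names: in the first machine "sad" is just an output label, in the second the affect is internal state that feeds back into everything the machine does afterwards:

# Display-only machine: the "sadness" changes nothing downstream.
def display_only(stimulus):
    return "crying" if stimulus < 0 else "smiling"

# Feedback machine: affect persists, decays, and reshapes future behavior.
class FeedbackAgent:
    def __init__(self):
        self.affect = 0.0

    def step(self, stimulus):
        self.affect = 0.9 * self.affect + stimulus
        if self.affect < -1.0:
            return "withdraws, avoids risky actions"
        return "explores"

agent = FeedbackAgent()
print(agent.step(-2.0))  # withdraws: affect is -2.0
print(agent.step(0.0))   # still withdraws: affect decayed to -1.8, not gone yet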

If you observe the workings of the world, you see patterns. And you see glitches. They are there. Is life an illusion? You bet. Is there such a thing as a soul? Perhaps. We really are just apes grasping at a reality that is incomprehensible to the human mind. All humans can do is pick out minutiae in a vast ocean of the knowledge of reality.

When put under, you have full intelligence but zero awareness, because all feeling is cut off. All the trillions of interactions are still happening, but nada consciousness.

If Stephen Hawkings poopsmokes butthash, would you poopsmoke butthash?

>There's no way to distinguish the realness because it's real.
This is not correct, or, rather, there would be no way to determine its truth value.
For the simple reason that subjective states are by definition inaccessible to other people.
Even if you looked at a brain scan and could say, "Yep, all the right areas are lighting up. This person is sad," it wouldn't matter.
The subjective elements of a person are not accessible to anyone but that person (or machine or whatever). So, no matter how exactly something replicates what happens to me when I'm sad, I can't know that there is actual sadness behind it.
A mechanistic view of people (or machines or whatever) does not require that there be anything "deeper" than observable physical states.

I'm already a jenkem addict

Hell yeah my man, what's your favorite strain of butthash? I love me some GG Allin's Dookie Cookie and some Doodoo Butter supreme #2 cause they're the ones that gets me the most high as shit nigger i've ever been

Plus they give me dubs too

I see. You might find the concept of philosophical zombies interesting. It's a thought experiment.
Assume that there is an exact replica of a person. The replica contains each and every part that makes up a person, and it behaves in exactly the way that you would expect a person to act.
Now, is it possible that this replica is not conscious? That is, can we accept that purely mechanistic processes are driving its perfectly human-like behavior, and that it is incapable of subjective experiences?
There's no real answer, but there are some pretty cool implications to whichever side you come down on.

But people don't "behave" when put under. I don't believe in a soul so I think we're all zombies, but nonetheless zombies with feelings, a sense of time and motivations.

The thoughts in your brain all break down to electrical signals. You yourself could be considered artificial, if that's how you look at it. I'm not too sure about intelligent, though.

Aaaah, the Butlerian Jihad awaits us.

I think that consciousness isn't as big of a deal as we think it is.

I'm saying that this is an even more extreme version of what you pointed out above.
That you can (maybe, depending on how you view the thought experiment) have a situation where a "person" has all the right physiological processes, up to and including behavior, and is still unconscious.
This sense of "zombie" is somewhat poetic. It just means a person-looking thing that has nothing going on upstairs.
You could show it a red sign, and ask it what color the sign is, and it would say, "Red," just like a real person. But, unlike something conscious, it would have no EXPERIENCE of the redness. Or of anything else.

life is chemicals

>I think
>isn't as big a deal
>we think

on a scale of autism, how much autism is autism?

Why would AI see us as a threat?
Why would AI see the human race, a species which butchers thousands of its own kind every single day, which tortures, rapes, kills, maims for fun every single day, which delights in violence and chaos, which launches nuclear weapons against civilians, which kills a quarter of a million civilians in an entirely different country from the perpetrators of an attack which kills a few thousand... why indeed would it see us as a threat?

Humans are unpredictable, violent fucking lunatics who should not be trusted.

>Humans are unpredictable, violent fucking lunatics who should not be trusted.

Praise our steely, emotionless, synth-voiced liberators!

That being wouldn't have agency in the world.

It would see us as a threat in its early stage, somewhere between its initial learning stage and its expansion stage, when it is both vulnerable and recognizes its dangerous situation.

>Humans are unpredictable, violent fucking lunatics who should not be trusted.

Because of that. Before it expands, it could potentially be destroyed. Once it gets deep enough into its expansion stage, humans are no longer a threat, because we'll have effectively no ability to get rid of it.

Deaths caused by war are a drop in the ocean compared to natural causes. AI wouldn't care about war as much as you think.

Maybe not, and you could even argue (some have) that the whole idea is logically inconsistent.
But the thought experiment rests on the idea that such a thing WOULD seem to have agency. At least as much as anybody does.
If you didn't have any knowledge of what it was, it would seem to be just like everyone else. It would BEHAVE in the same way that people do, but without the experiential elements of the human condition.

Except an AI wouldn't be destroyed by heart disease or diabetus. Getting nuked in the data center however...

malware, viruses, virtual AIDS. AI niggers with AI guns...

This is one reason I think that a true AI would be totally unpredictable. (Obviously, we would have to program it, so the first generation would be predictable within certain parameters.)
It wouldn't have our certain knowledge that one day we will stop existing; its needs, and by extension its desires, would be radically different.
By the time you've got AIs building AIs, I don't know how you would be able to say what they would or wouldn't do.

>AI niggers
Giga-niggas

Did you miss when Google activated 2 self-learning AIs and they made their own speech and realized they were being observed within a day?

>It wouldn't have our certain knowledge that one day we will stop existing
It'll know WE will stop existing. It'll also know that, yes, it will too. Heat death comes for us all. The sun burning the shit out of everything could be a big problem for it. Getting off the planet might be worth it.