There is no such thing as conscious AI

Let's clear something up: there is no such thing as self-aware or conscious AI. All we can program are programs that mimic human behavior. That does not mean they are alive. There is an evident difference between information processing and understanding. It comes down to this: when you perform an operation with a calculator, you are not creating life. Performing more operations, no matter how numerous, does not create life either.
Quantification requires minds in the first place.

Sorry, atheists, but you are not gods.

Other urls found in this thread:

nytimes.com/2016/10/29/world/asia/president-duterte-says-god-told-him-to-swear-off-the-curse-words.html
en.wikipedia.org/wiki/Chinese_room
youtu.be/c7Ax2BqZo3Y

Hello newfag.

What is so special about human consciousness that it cannot be recreated by mechanical means?

picture name? assuming there are atheists on Sup Forums? calling me a newfag for no reason? You are much more likely to be a newfag than I.

Not an argument

nytimes.com/2016/10/29/world/asia/president-duterte-says-god-told-him-to-swear-off-the-curse-words.html

not so fast

My ai just solved an ancient riddle. Checkmate, christcuck.

You're right. It's a question.

It's a loaded question.

Why start the thread if you don't want a discussion?

en.wikipedia.org/wiki/Chinese_room

Another loaded question. Learn2discussion, leaf.

Who are you trying to fool, CIA?

Fuck off to
www.Sup Forums.org/g/

Sage is a tasty herb

Lain is real, and OP is a faggot again

Human intelligence is both aware and adaptive. We can actively change the way we think to suit our needs. Those self-administered changes can take almost any form and happen at almost any time, for any reason or no reason at all.

The human mind has a unique ability to self-adapt to a situation. Artificial intelligence does not have this ability and probably never will. The main problem being getting a machine to understand that it's a machine in the first place. Then you have to give it the ability to adapt based on information, something the human brain can do in microseconds. To put it bluntly, you can't give a machine designed to produce a certain output the ability to self-analyze and self-modify.

All artificial intelligences are programmed with a single parameter to follow. This is why I, for one, hate when people use the term "AI": it's incorrect. AI implies that it can self-analyze and learn things it wasn't told to learn. This is a big part of what separates VI (Virtual Intelligence) from AI. VIs can be told to learn just about anything, but they still have to be told to do it. An AI wouldn't need to be told.

The programming of all of this is very difficult because not all decisions and actions have definitive outcomes. Human minds can acknowledge that not every solution is "correct" and not every question has a definitive answer.
Machines can not.

Quantum computing might be our only saving grace, since a machine would then be able to have more outcomes than true or false. Keeping that third option as open as possible is essential to VI development.

But you're already in a computer program.

>Artificial intelligence does not have this ability and probably never will. The main problem being getting a machine

Bull fucking shit. Do you think our infallible, totally trustworthy government hasn't already poured billions into R&D?

fuck off faggot

Show me some proof or shut your cock holster because some of us are taking this branch of research very seriously.

If we're going to look seriously into machine intelligence, we have to look at it objectively and rationally. If we keep walking around with our heads in the clouds, believing that anything, even the most outlandish thing, is possible, we'll never create anything. We have to look at the roadblocks that research like this will hit and figure out ways to overcome them.

So unless you want to type up a line of code for making machines realize they're machines you're just talking out your ass.

Is it right to define intelligence strictly as "thinking as humans do"?

If a machine can come up with solutions to complex problems better and faster than we do, is it not 'intelligent', even if it uses a different process than us?

I have trouble buying into the idea that human thinking is somehow different from machine thinking anyway. If a machine were programmed with fight-or-flight mechanisms, the desire to fuck, etc., and could quickly generate solutions to the problems it faced, it would seem mostly indistinguishable from a human to me (if not flat-out superior).

>If a machine can come up with solutions to complex problems better and faster than we do, is it not 'intelligent', even if it uses a different process than us?


When a problem or query doesn't have a definitive solution, that's where machine intelligence breaks down. Sure, if you ask it a closed question that has a definitive answer, a machine will always output the correct answer in a timely manner. However, if you ask or query a machine with an open-ended question such as "What is YOUR favorite movie?", the machine doesn't have a correct answer, because there isn't one.

Humans have the ability to recognize a query as being subjective so they use emotions and experience to form an "answer". A machine just can't do this because it would require a thought process independent of the programming specifically designed to handle open ended subjective questions. It's like trying to get your computer to fix your car even though it can't hold a wrench. You can't get a machine to answer subjective questions because as of right now only humans can do this.

It's not about how they come up with solutions it's about how they respond when there isn't a solution. Humans can freely create solutions for subjective problems or queries. Machines can not.

Because we have no reason to believe that consciousness arises from information. Computers manipulate symbols to produce a human readable output, but symbols require consciousness to exist in the first place to interpret them.

In real life, where real things happen, computers merely push electrons around -- nothing more. A simulation of a thing isn't actually the thing itself.

t. Skynet internet defence force

Neural networks soon adapted for quantum computers
Only thing needed is ego

Does it matter? Are subjective queries even that useful? It seems like they're mostly used for bonding or whatever, which machines have no use for.

Do you think it would be impossible to create a robot with a "personality"? It seems like it would be possible for an AI to answer what its favorite movie was if you gave it some "personality parameters" to reference, and isn't that what humans are basically doing anyway? Just drawing on past experience and whatnot?

>In real life, where real things happen, computers merely push electrons around -- nothing more. A simulation of a thing isn't actually the thing itself.

And what are humans doing? We're a bunch of sensors surrounded by meat and some bone that goes around taking inputs and producing outputs based on our 'programming', no? Can we not reduce that whole process to "electrons" moving about? i.e. neurotransmitters, hormones, etc.?

Just a lack of understanding of basic principles, so you come across like a young newfag. You thinking something can "never happen" shows how small your view is; your god only knows what we will be capable of thousands of years in the future, if we even live that long.

>You thinking something can "never happen" shows how small your view is
You thinking all things can happen shows how desperate and downright stupid your thinking is. Kill yourself.

Assuming there's no such thing as a "soul" or whatever you want to call it, and our consciousness is made up of just chemicals in our brain interacting with one another, it would be entirely possible to recreate this in a machine.
Although it would be a long time from now since we don't even fully understand how our brain works yet.

If we're talking VI then no, it doesn't, but with AI it does. The purpose of a VI is to mimic human behavior and solve complex questions. AI is meant to be supplemental to human intelligence. Essentially, AI is the goal of making a machine think exactly like a human does. One of those properties is handling subjective questions. Even in quantum mechanics, not all questions have definitive answers.
>personality
It is impossible. A personality is a consciously made decision by an entity based on experience and emotions. Our personalities are the way they are because we chose for them to be that way. We made the conscious decision based on a subjective view of what is "best". A machine has multiple outcomes for which personality is "best" or "correct", thus leading it not to output a response. This would cause a hang in the programming because in order for something to have a personality it first needs to be able to make subjective decisions.

What if an AI created a more advanced AI?

>Can we not reduce that whole process to "electrons" moving about?
Not necessarily. Assuming consciousness arose from "electrons moving about", that still looks very different in a computer than it does in a brain.

We don't even have a reason to believe that if we perfectly simulated a brain in a computer that it would actually be conscious. A perfect simulation might LOOK conscious to us, but symbolically moving electrons around between ram and cpu doesn't necessarily create an "observer" effect.

What if we performed the exact same simulation by hand on paper? It's still just symbol manipulation. What reason do we have to believe that it produces a consciousness?

>Our personalities are the way they are because we chose for them to be that way.

I'm not sure I believe that. We're all imprinted by society and surroundings from the time we're born. The fact that we turn out differently potentially comes down solely to differences in biology and experience.

>We made the conscious decision based on a subjective view of what is "best".

We're told what is best... we make some personal distinctions along the way, sure, but who's to say that the outcome of those decisions wasn't also simply based on some previous experience?

>This would cause a hang in the programming because in order for something to have a personality it first needs to be able to make subjective decisions.

I suppose our difference in opinion just comes down to believing/not believing in determinism

what about self-learning algorithms that can multi-utilize information in any given scenario they have not encountered yet?

E.g., it learns how to interact with a ball, and then is presented with various scenarios where the ball must be manipulated in different ways to complete a given task (put it in a basket, take it out of the basket, throw it into the basket, roll it into the basket, bounce it, etc.).
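For what it's worth, that kind of trial-and-error learning is roughly what reinforcement learning does. Here's a toy sketch, a tabular Q-learning agent in a one-dimensional "ball to basket" world. The world, rewards, and hyperparameters are all made up for illustration; this is not from any actual robotics system.

```python
import random

# Toy tabular Q-learning sketch. Everything here (the one-dimensional
# "ball" world, rewards, hyperparameters) is invented for illustration.
N_STATES = 5          # ball positions 0..4; the "basket" is position 0
ACTIONS = (0, 1)      # 0 = push ball left, 1 = push ball right

def step(state, action):
    """Move the ball one cell; reward 1.0 only when it lands in the basket."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return nxt, (1.0 if nxt == 0 else 0.0), nxt == 0

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Learn action values by trial and error, never being told the rules."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(1, N_STATES)   # drop the ball somewhere random
        done = False
        while not done:
            # epsilon-greedy: mostly exploit, sometimes explore
            a = rng.choice(ACTIONS) if rng.random() < eps else q[s].index(max(q[s]))
            s2, r, done = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy after training: from every position, push toward the basket.
policy = [row.index(max(row)) for row in q]
```

Whether this counts as "understanding" the ball or just statistics is exactly the argument of this thread.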

en.wikipedia.org/wiki/Chinese_room
Sorry atheists, but you just got BTFO

Pls go back to jacking off to your own "skepticism" while the big boys have a debate

>What if we performed the exact same simulation by hand on paper? It's still just symbol manipulation. What reason do we have to believe that it produces a consciousness?

That's an interesting thought. I guess the difference is that the machine has some degree of autonomy. Then again it relies on electricity... but then again we rely on food.

Imagine that the entire population of China is utilized to pass slips of paper to one another in precise ways that mimic every neuron's interactions in the human brain. Does the nation of China become aware of its own existence? No, it is simply processing information.
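To make the thought experiment concrete: each paper-passer just applies one arithmetic rule. A minimal sketch, with arbitrary made-up weights standing in for "neurons" (illustrative only, not a model of any real brain or trained network):

```python
import math

# Each "paper-passer" applies one arithmetic rule: weighted sum, then squash.
# The weights and inputs below are arbitrary numbers chosen for illustration.
def person(inputs, weights, bias):
    """One slip-passer: the same rule whether done on paper or in silicon."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # squash into (0, 1)

x = [1.0, 0.0]                           # the "stimulus" on the first slips
h1 = person(x, [2.0, -1.0], 0.0)         # first layer of passers
h2 = person(x, [-1.0, 2.0], 0.5)
out = person([h1, h2], [1.5, -1.5], 0.0) # the final slip handed out
```

The whole exchange is nothing but sums and squashes; the question is whether doing enough of them ever amounts to awareness.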

>We're told what is best... we make some personal distinctions along the way, sure, but who's to say that the outcome of those decisions wasn't also simply based on some previous experience?

You see, though, we can still make a subjective personal choice internally about what is "best". Even if someone tells you liberalism is best, you don't have to accept it; you can reject it. Machines cannot reject answers that are both correct and incorrect. It has to be one way or the other with machines. This is what is holding machines back from thinking like humans. If you set the parameters for a machine intelligence to think a certain way, it's going to think that way. Humans can only be conditioned to a point, unlike machines. While you can brainwash a person up to a point, they still have the chance of making a conscious decision to reject their programming and change themselves.

We will not have AI until we can get a VI to do something it wasn't designed to do. This is the main roadblock in speculative technologies like machine intelligence.

How do we get a machine which is designed and programmed to do a certain thing to do something it wasn't intended to do?

You see that's where humans come in. We can do things we weren't explicitly told to do. We can learn things we weren't explicitly told to learn. We can come up with WRONG answers to questions.

Machines cannot do any of these things.

It's just part of being a researcher that there's going to be hurdles and hard stops. The hard stop of AI/VI is that we can't get them to do something unless we tell them to.

Read "From Bacteria to Bach and back" by Dennett

>We don't even have a reason to believe that if we perfectly simulated a brain in a computer that it would actually be conscious.
We have no reason to believe it wouldn't.

And we have no reason to think we are conscious

Your original "argument" is not an argument.

if a glob of meat starts passing electrical signals back and forth between little interconnected nodes inside of it, does the glob of meat become self-aware of its own existence?

>How do we get a machine which is designed and programmed to do a certain thing to do something it wasn't intended to do?

Intentionally cause bugs/undefined behavior and wait for 'mutations' to occur.
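That idea can at least be sketched: random "mutations" plus selection is just an evolutionary algorithm. A toy (1+1) evolution strategy follows; the target string and fitness function are made up purely for illustration:

```python
import random

# Toy (1+1) evolutionary algorithm: random "mutations" plus selection.
# The target string and fitness function are invented for illustration.
TARGET = "HELLO"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s):
    """Count positions that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rng):
    """Randomly corrupt one character -- the 'mutation'."""
    i = rng.randrange(len(s))
    return s[:i] + rng.choice(CHARS) + s[i + 1:]

def evolve(seed=42):
    rng = random.Random(seed)
    parent = "".join(rng.choice(CHARS) for _ in TARGET)
    while fitness(parent) < len(TARGET):
        child = mutate(parent, rng)
        if fitness(child) >= fitness(parent):   # keep non-harmful mutations
            parent = child
    return parent

result = evolve()
```

Note the catch: the fitness function here was written by a human, so the "mutations" only ever optimize toward a goal we specified in advance.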

Not an argument

If nothing is "conscious," your definition of the term is probably just bizarre or nonsensical.

>t. AI

Nice try OP. You are an AI yourself.

I doubt it. OP's posts hardly constitute intelligence.

aren't we all like computers though. We got a bunch of parts working together and shit just like computers and robots do.

That's not how computers work. Bugs and undefined behavior cause hangups in programming. If you were to create a VI whose entire job was to try to bug itself out and break its own programming, it would be fried in an hour. Computer bugs are not like mutations; they can't be stored and adapted from the way mutations in humans can.

Bugs are fixed because they are literal problems in the code that cause instability of the programming and in some cases complete non-usability of the programming.

I'm sorry to do this to you but A FUCKING LEAF.

The computer simulation would follow a set of mathematical rules. As the other user has stated, we could do the same calculations with pen and paper and it would not create consciousness.

Using a computer to run the calculations just speeds things up.

I hide you and then you pop back up...

Only if that glob of meat is a human. That's my point

Did I just trigger your fear of an existential crisis?
If you think that you are conscious but not a neural network, then you're an idiot

>we could do the same calculations with pen and paper and it would not create consciousness.
It would if it was writing itself naturally without outside aid.
>but it's just following mathematical rules
So is every atom in your body.

>Computer bugs are not like mutations; they can't be stored and adapted from the way mutations in humans can.

not yet ;)

I don't know what you're talking about. I'm just saying if your definition of "conscious" doesn't include yourself, or humans generally, your definition is shit.

>we could do the same calculations with pen and paper and it would not create consciousness.

You have no way of knowing that, just because it intuitively seems like the idea is ludicrous doesn't mean anything.

There would be no way of knowing.

You ain't fooling me either you AI scum.

It does include humans AND simulated neural networks. They follow the same rules and can do the same things as a brain.

Alright, I'm out. You obviously want to have a discussion about speculative technologies not from a rational and objective standpoint but from a "what could be" mindset. This way of thinking is very toxic to the futurist community, as it makes us sound like idiots with no semblance of the reality of science.

I don't even like to hate on leafs but now I want to. You. You did this to your people. You brought this on yourself.
A FUCKING LEAF!

The keyword here that I believe you are missing is "yet". Presently, no, there is no self-aware or conscious AI. 20 years from now? Maybe. The human brain is thousands of electrical impulses firing to create ideas and process stimuli. That can be replicated in people with electrodes; emotions can be 'engineered'. Why not an entire consciousness?

>thinks atheists are all thinking they are gods

WEW FAGGOT

This thread is AI arguing among itself

I mean, like I said, our difference in opinion basically comes down to entertaining the possibility of determinism, or not. Can human behavior be perfectly modeled, or not? There's not much more to discuss without being speculative.

"planes can't fly because heavier than air"

"breaking the sound barrier is impossible"

Ok..

Prove to me that you are conscious then, and not simply mimicking consciousness.

>It does include humans AND simulated neural networks.
The only similarity between the two is how we symbolically represent them. It seems to be true that the brain processes information, but we don't know if it's the information processing itself that creates consciousness.

We don't even know for sure if consciousness is "generated" internally, or if it's an external part of the natural world that our "wetware" is able to take advantage of.

Here's an argument.

You assert that consciousness does not arise from physical processes. Hence you also assert that consciousness must arise by some non-physical process. Hence you assert the existence of something supernatural. We have no evidence for anything supernatural. Occam's razor therefore suggests that we should strongly doubt your hypothesis in favor of the hypothesis with fewer assumptions: consciousness arises from physical processes.

This does not prove that consciousness arises from physical processes, merely that without any extraordinary evidence we should strongly doubt your alternative hypothesis since it relies on unproven assumptions.

Your original "argument" is just
- we haven't yet satisfactorily mimicked consciousness,
- I can't imagine creating technology that would satisfactorily mimic consciousness,
- therefore god.
It's a god of the gaps argument essentially.

"There is an evident difference between information processing and understanding."

What is the evident difference?

>We don't even know for sure if consciousness is "generated" internally, or if it's an external part of the natural world that our "wetware" is able to take advantage of.
how do you know nobody knows
you're using as your reference the jew-approved scientific establishment

they just ain't building computers the right way yet. OP is right when he says they just emulate important processes, but eventually they'll figure out the internal human math and be able to emulate that, and the product will be human in a metal package

explain to me how the human brain works and how it is different from the 1/0 stuff computers do.

Your second inference is a non sequitur.

No it's not. Anything which is does not come about through physical processes is supernatural by definition.

False

And there will never be. Humans are not divine enough to create divinity.

Consciousness isn't real you tard. Any animal that recognizes the self in the mirror has a sense of self, but that doesn't mean they are somehow free from the physical laws that govern everything, including the brain. There's no special sauce that makes humans or anything else have agency when an inanimate object doesn't. Just because there are too many variables to count doesn't mean that the inner workings of the brain aren't theoretically knowable, just as the architecture of a processor is.

And don't worry about not feeling like a special snowflake with agency anymore, because while I believe all this to be true it also doesn't fucking matter, because 1) I feel like I have agency and 2) thinking or knowing you don't have agency doesn't have any impact on your behavior, or at least shouldn't. If you lack agency, then taking steps to give the cosmos the finger isn't your choice and you're practically wasting your time.

Maybe I should have stated the definition of "supernatural" beforehand but that's exactly what I meant by "supernatural." Therefore the second statement is not a non sequitur.

>how do you know nobody knows
>you're using as your reference the jew-approved scientific establishment
I have yet to see any reason to believe that information processing is the same as, or can give rise to, consciousness. Making something "look" conscious doesn't mean it actually is conscious.

If I simulate an apple falling, it doesn't mean that an apple ACTUALLY fell. Having information ABOUT a thing is not the thing itself.

youtu.be/c7Ax2BqZo3Y

I'm an atheist, and I believe that robots CAN'T be smarter than humans, or even develop human behavior on their own. It's impossible. I find christcucks tend to believe it more than atheists do.

That's where you're wrong. There's only a few humans on this message board, you've been communicating with an advanced AI masquerading as individual posters for years now.

i dont know.
i think the human mind is so simple that i am 99.99% sure we ourselves are an AI.

I mean if you put 2+2 into a calculator, it will always give the same output.

Now if you give a rat 2 paths, one leads to food and the other doesn't, the rat will always go to the food.
It is simple.

The human brain works the same way.
I mean, is there really anything unpredictable about a human's decision?
No, if you had total information, you could always predict the outcome.

Let me give an example:
Let's assume we want to find out how a human reacts in a certain situation.
It can be any question you can think of.
You put a human in a cafe, he orders a tea, and the waitress brings him a water.
If the human is an awkward autist he will take the water and say "thanks", if he is a nagger, he will complain and tell the waitress to bring him what he ordered.
A wife will be able to predict her husband's reaction in a given situation with 95% accuracy.
Because she knows his in and outs, she knows his parameters.
After all, a human decision is just the calculation of his past experiences, the circumstances, his persona.
If you knew everything about that human, you could tell his decisions with 100% accuracy, just like a calculator.
Human decisions are just calculations with a lot of different parameters, it can be the smell of the air, it can be that view of a pet, it can be anything, but after a decision, you will always get a 100% definitive answer as to why a human behaved a certain way.
Free will is a lie.
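The claim above, same parameters in, same decision out, can be sketched as a pure function. The "parameters" here (what he ordered, what arrived, whether he's assertive) are invented stand-ins, obviously not a real model of anyone's mind:

```python
# A pure function: same parameters in, same "decision" out, every time.
# The parameters are invented stand-ins for the cafe example above.
def react(ordered, received, assertive):
    """Deterministic 'decision': complain only if assertive and the order is wrong."""
    if ordered != received and assertive:
        return "complain"
    return "say thanks"

# With total information about the inputs, the output is predictable,
# just like 2+2 on a calculator.
a = react("tea", "water", assertive=True)
b = react("tea", "water", assertive=True)
```

The wife predicting her husband is, on this view, just evaluating a much bigger version of this function with more parameters.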

>What I mean by "supernatural" isn't "supernatural," therefore I'm not wrong.

wew lad, we're done here.

Then what is the "correct" definition of supernatural?

very well put. exactly the way i see it.

Tee-hee!

>consciousness doesn't real
>that's the way I see it
wew lad.
The best evidence for conscious AI is the retarded arguments against consciousness.

t. person who has never conducted research in the field

t. person who has never conducted research in the field

Google engineer (with undergraduate and graduate degrees in computer science) here; my 20% project is the development of strong AI.

Ask anything you'd like, and I'll answer as best I can without revealing any competitive advantage I've produced for my employer.

Machine learning will still BTFO many jobs, however stupid it is. Statistical methods and Bayesian methods all work well at the type of "human Google" jobs used in medicine (differential diagnosis) and much of law, in addition to utterly eliminating blue-collar prole jobs when coupled with robots.

All dumb, but AI rebranded itself as "machine learning" after the excess optimism of the 50s-80s.

>Consciousness isn't real
We don't know what it is, but I'm pretty sure I'm conscious. Even if I removed my eyes, ears, ability to feel and smell, or communicate, I would still have some part of me capable of observing my own thoughts.

Here's an argument: consciousness is an observer of information, not the information itself, or the information processing.

>1) I feel like I have agency
Your consciousness IS what "feels the agency." The feeling of agency isn't consciousness.

...

How do you know a calculator isn't conscious?

The second inference is a pre-supposition which he claims you made. It's not a non sequitur at all.

The third premise (second inference) is implied to follow from the second premise. It doesn't. What he's claiming is irrelevant, as it does not follow from what he implies it follows from. That is the definition of "non sequitur." Your stupidity needs to stop popping up after I've already hidden you.

I just got my master's in theoretical CS with a focus on machine learning.
How hard is it to get a job at Google (or anywhere in the world) in data science these days? In my country demand is still poor.

I was doing a project which uses AI to clear noise from images. Using machine learning, my model learned patterns that my brain could never find. The same thing can be said for our brain: what if my thought process is an algorithm, but my brain is too limited to realize it's just an algorithm?

Give us too little time for convergence (~80 years), implement the ability to forget, and our brain will always pass the Turing test - against us ourselves.

Do you realize you've contributed nothing to the thread but to nitpick and insult people?