I honestly don't believe artificial intelligence will ever happen...

I honestly don't believe artificial intelligence will ever happen. It may be more philosophical than actual science or math, but if AI were possible, I think the bots would be self-conscious, confused about their sexuality, and all the other bullshit that comes with being an intelligent being.

Other urls found in this thread:

amazon.ca/Consciousness-Brain-Deciphering-Codes-Thoughts-ebook/dp/B00DMCVXO0/
deepmind.com/blog/reinforcement-learning-unsupervised-auxiliary-tasks/
universe.openai.com/
gym.openai.com/
wsj.com/articles/if-your-teacher-sounds-like-a-robot-you-might-be-on-to-something-1462546621

>A man will never fly

Being sexually confused is NOT a sign of intelligence. It's a sign of disorder or malfunction from a purely biological reproductive standpoint. Why the fuck would you replicate a flaw?

>Intelligent humans are gay in higher rates
>Gay can't exist without intelligence
Hmm?

AI most likely won't have emotions to weigh it down. It'll be a cold, calculating sociopath.

I think AI will be radically different from human intelligence. You're not thinking big enough.

AI smarter than humans will definitely happen; the problem is that it will be that Chinese Room bullshit, no consciousness

Man forming another man from earth, just like God. The ultimate blasphemy.

If we ever get truly sentient AI, I (unironically) believe the apocalypse is near.

I also don't believe it will ever happen. I think humanity will destroy itself before we attain such technology.

Correlation is not causation. It could easily be that the open-mindedness associated with higher intelligence results in more self-honesty.

I unironically think that you're fucking retarded.

AI already exists. Of course it can't be conscious. It's a machine and there is no way of programming consciousness because there is no physical evidence consciousness even exists. You know YOU have it, but you just have to have faith that other people have it, there is no scientific evidence they do because there is no way to measure it.

AI already happened you dunce. You use it daily.

True AI

You're not even holding a stable point in your post.

Yeah, I'm sure hyper-intelligent machines will be really concerned about sexuality, will consume MTV, and will get upset at their creators for holding bigoted beliefs about their freedom to only talk about how they feel bigendered semisexual this week

The AI we have today is already as true as theoretically possible.

Like in the movies, though, is what I think he means

>[they] would be self-conscious, confused about [their] sexuality and all the other bullshit that comes with being [a self-professed] intelligent being
So they would basically be the average Sup Forums user...

I see nothing remarkable about this.

The movies only show artificial intelligence acting like it has feelings. Which is already possible. You can get a robot to say "Aoo!" if you hit its sensors.

No movies have actually shown any reason to believe the movie robots actually are conscious. It is not even possible to show it in fantasy.

The movie Her did a remarkable job I thought.

this is why /sci/ thinks we're retarded

The problem with people thinking that AI will never achieve consciousness is that they attach too much value to consciousness.

People believe it's some mystical and magical thing, while in reality there's nothing supernatural about it, and science will someday be able to replicate the physical actions in a brain and make a new person.

It just showed a machine that is programmed to act in a certain way.

There is no evidence in the movie that any consciousness was in the machine, other than the background music and implications, because it looked like a human and was programmed to act like one.

3deep5me man

Westworld nailed it, no matter how good the illusion of free will a machine has, it'll always be following a script.

It is "true AI". What you're thinking of is called "artificial general intelligence".

>Artificial intelligence is impossible because I can't believe a robot can be self-conscious.
I hear this all the time.

WHY do you think that artificial INTELLIGENCE requires self-CONSCIOUSNESS? Do you think intelligence and consciousness are the same thing?

Just like humans

A machine, yes, but not a pseudo-brain made by a mixture of chemical and physical stimulation within an isolated system, just like a person.

I don't see the point, though, when a machine with machine logic would be vastly superior.

>Do you think intelligence and consciousness are the same thing?

Do you have any arguments for why it is not? As far as the evidence goes, there is a pretty huge chance it is highly correlated, as consciousness has only and exclusively been observed in generally intelligent beings.

>Do you have any arguments for why it is not?

Because intelligence is defined differently by academics. Which is stupid, of course, because in normal language the two are connected to each other.

People will always assume that the Turing test also tests for consciousness due to this.

>As far as the evidence goes, there is a pretty huge chance it is highly correlated, as consciousness has only and exclusively been observed in generally intelligent beings.

Any computer program is considered intelligent, more or less. It is human-like intelligence if it can be programmed to trick people into thinking it's human. But it's still a computer program, a machine, a calculator. So it has NOT been exclusively observed in living things.

Whether we observed the two phenomena separately in nature etc. is not really relevant to the question of artificial intelligence. I can imagine something intelligent but not conscious. Can you give any reason why that is impossible?

In a way, though, you can even see examples of that in nature. People who are sleepwalking are not conscious but are moving intelligently.

>the Turing test
The Turing test is a pop science meme taken only seriously by people who think the answer to the Universe is "42 xD"

In serious circles of AI research nobody takes the Turing test seriously.

I specifically said GENERALLY intelligent to avoid this misunderstanding of terms. No generally intelligent computer program has been developed yet. Every general intelligence in existence shows evidence of consciousness.

>Whether we observed the two phenomena separately in nature etc. is not really relevant to the question of artificial intelligence
It is absolutely relevant, because it is the only reference data we have.

>Can you give any reason why that is impossible?
I never said it was impossible. Until we develop an artificial general intelligence there is nothing we can do to test the hypothesis. And it may not be possible to test after the fact either.

>In serious circles of AI research nobody takes the Turing test seriously.

It is the closest we can theoretically get to checking for "consciousness", and it does not even check for intelligence, but it does redefine it.

>No generally intelligent computer program has been developed yet.

Yeah, not even in fiction has it been shown. It is not even possible to imagine how to check for it.

>Every general intelligence in existence shows evidence of consciousness.

There is no scientific evidence of consciousness. You know that you have it, but anything else is pseudoscience at best. It has never been scientifically measured.

Intelligent beings are clearly possible. There have been at least 5 intelligent people in the last 100 years.

This is the kind of ignorance I expect to see on Sup Forums

amazon.ca/Consciousness-Brain-Deciphering-Codes-Thoughts-ebook/dp/B00DMCVXO0/

>I honestly don't believe artificial intelligence will ever happen. It may be more philosophical than actual science or math, but if AI were possible, I think the bots would be self-conscious, confused about their sexuality, and all the other bullshit that comes with being an intelligent being

Mr Frogposter Philosopher, nobody cares about qualia and other feels of machines.

Machine Learning & Artificial Intelligence researchers are interested in building systems that automatically solve hard problems (by learning on data and/or interactions with environment). They do this by establishing standard benchmarks and comparing their system performance against these benchmarks.

It's all quantified. The stronger your AI system is, the better scores it gets. The benchmarks are representative of real world problems, so you can expect a better real world performance as well.

If you had a good enough AI (achieving a very good score), you could use it to do wonderful things. For example, DeepMind, a leading AI company, explicitly says that their goal is automating the scientific process (coming up with hypotheses, checking them, analyzing the data, repeat 1000x).

With strong enough AI/ML humanity could rebuild its environment, find cures for all illnesses, automate 99% of labor, and generally become much more wealthy than it is now. That's why nobody cares about "muh machine special qualia feels", mr Frogposter Philosopher.

>There is no scientific evidence of consciousness. You know that you have it, but anything else is pseudoscience at best. It has never been scientifically measured.

You're mistaking proof for evidence. The fact that I say I am conscious is evidence of the fact. It doesn't prove it, but it is certainly evidence. The fact that you are conscious and all other humans are made of the same stuff as you is also evidence that all other humans are telling the truth when they are saying they are conscious.

>No generally intelligent computer program has been developed yet.

deepmind.com/blog/reinforcement-learning-unsupervised-auxiliary-tasks/

>AI smarter than humans will definitely happen; the problem is that it will be that Chinese Room bullshit, no consciousness
Post yfw humans are just chinese room bullshit?

>mfw the intelligence explosion will happen within my lifetime

>confused about their sexuality and all the other bullshit that comes with being an intelligent being

But that's wrong. Sexuality is a tool for evolutionary selection in reproduction; it has nothing to do with intelligence. You are not required to be sexual to be intelligent, especially if you can edit your own code and be your own logical selective process that evolves itself.

There is no scientific evidence that consciousness exists. They observe electric fields, which exist in computers as well. If the existence of electric fields is evidence of consciousness, then computers are already conscious.

>The fact that I say I am conscious is evidence of the fact. It doesn't prove it, but it is certainly evidence.

AI can also do this.

>The fact that you are conscious and all other humans are made of the same stuff as you is also evidence that all other humans are telling the truth when they are saying they are conscious.

You cannot measure it. The only thing you have to go on is what people say, which is the lowest possible form of evidence in academia. Even non-science fields like history consider it dubious evidence.

>evidence for consciousness
And how would you do that faggot? Prove to me that you're conscious, right now.

>Prove to me that you're conscious, right now.
no

Yup. At least I hope for an open powerful RL agent to become available, so I can push my working duties to it.

>Our primary mission at DeepMind is to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how. Our reinforcement learning agents have achieved breakthroughs in Atari 2600 games and the game of Go.

How can conciousness be real if our brains aren't real?

Ok, but intelligence is not a black/white scale, it is gradual. There are generally intelligent machine learning systems, like that, but at the very best they are on the level of an insect. When I said generally intelligent I meant at least on the level of a mammal or something like that which we are relatively certain is conscious. It is not certain whether insects are conscious or not, if they are it is likely a much more vague and simplistic form of consciousness compared to humans.

That's an entire book showing strong scientific evidence for consciousness, you dingaling. Also, free will doesn't exist, and you're only capable of acting exactly as the laws of physics demand. Human exceptionalism is a joke. Thanks for playing.

We're getting there. In the next decade, it's going to happen.

>like that, but at the very best they are on the level of an insect.

They are already near the level of a mouse. DeepMind explicitly tries to use mouse cognitive tests (labyrinths with rewards) as benchmarks for their AIs. A Quake-like tagging task, where the AI learns to shoot other fast-moving bots regardless of their appearance, is an example of a task that would be hard if not impossible for a mouse.

Really, ML researchers are not interested in consciousness. They are interested in engineering algorithms that can learn to perform complex knowledge work, e.g. produce research in life science.

>That's an entire book showing strong scientific evidence for consciousness, you dingaling.

No, it shows measurements of electric activity in the brain. Not evidence for consciousness. You can emulate the same electric activity in someone who is dead, and with enough time you can recreate the pictures.

Neuroscience involves a lot of pseudoscientific nonsense

>When I said generally intelligent I meant at least on the level of a mammal or something like that which we are relatively certain is conscious.

No scientific evidence

> It is not certain whether insects are conscious or not, if they are it is likely a much more vague and simplistic form of consciousness compared to humans.

You can measure electrochemical activity in insects as well.

You don't know what you're talking about. Please stop exposing yourself; you're just ridiculous.

Dubious evidence but evidence nonetheless. And there is no other kind of evidence, making it the best available evidence. Also, I think you are underestimating its importance. Subjective statements are used regularly in tons of scientific fields, as surveys etc.

Also, current AI cannot to my knowledge consistently and insistently say they are conscious without being specifically programmed to do so, or by copying a sentence from a training data set saying that. The evidence must be evaluated, and things like this make the evidence of humans saying they are conscious far more credible than an AI saying it. An AI saying it is conscious would not be very credible evidence until it is able to learn to speak on the level of humans and without depending on a training data set containing such a questioning to use as a template. However, when/if that happens, it would likely be one of the strongest forms of evidence we could possibly produce for this kind of hypothesis, and must thus not be disregarded.

>Ok, but intelligence is not a black/white scale, it is gradual.

Sure, and it's awesome that there exist real software benchmarks that measure the intelligence of your agent: universe.openai.com/ gym.openai.com/

Since 2010 we have seen RL agents steadily getting higher scores on these and other benchmarks.
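For anyone who hasn't looked at these, the benchmark loop is dead simple: reset, step, accumulate reward, compare average scores. A toy sketch of the idea; the `ToyEnv` class here is made up to stand in for a real environment, but it mimics the same reset/step shape the Gym-style benchmarks use:

```python
import random

class ToyEnv:
    """Made-up stand-in for a Gym-style environment: reach position 10 to win."""
    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        # action 1 moves right, anything else moves left
        self.pos += 1 if action == 1 else -1
        done = self.pos >= 10
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}  # obs, reward, done, info

def evaluate(env, policy, episodes=100, max_steps=50):
    """Benchmark score = average total reward per episode."""
    total = 0.0
    for _ in range(episodes):
        obs = env.reset()
        for _ in range(max_steps):
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
            if done:
                break
    return total / episodes

random.seed(0)
random_score = evaluate(ToyEnv(), lambda obs: random.choice([0, 1]))
smart_score = evaluate(ToyEnv(), lambda obs: 1)  # always move right
print(random_score, smart_score)
```

The stronger policy gets the higher average score, and that single number is the whole benchmark; the real environments are just vastly harder versions of this.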

>Also, current AI cannot to my knowledge consistently and insistently say they are conscious without being specifically programmed to do so, or by copying a sentence from a training data set saying that.

Well sure, but I think you could train them in such a way as to nudge them to say "I am conscious" (^:

It's only a question of training ingenuity.

Again, I don't care about C-word.

You can only be sure of your own consciousness. You can make the simplest computer output "I am conscious"; you can make a chatbot that can converse with you, but it has no conscious thought.

He can't. No one can; only you can be sure you are conscious. "I think, therefore I am": you can't be sure anyone else is their own person.

>no argument, turns to ad hominem attacks
If there was something wrong then you would have pointed it out instead of namecalling.

>Dubious evidence but evidence nonetheless. And there is no other kind of evidence, making it the best available evidence.

You can get AI to provide the same kind of evidence and answer surveys etc., ergo by your definition there is just as good evidence that AIs are already conscious.

>Also, current AI cannot to my knowledge consistently and insistently say they are conscious without being specifically programmed to do so, or by copying a sentence from a training data set saying that.

Humans are not able to do that either without social programming and teaching people to use the word that way.

>The evidence must be evaluated, and things like this make the evidence of humans saying they are conscious far more credible than an AI saying it.

Maybe in your mind, but to anyone viewing you from the outside it is exactly the same as an AI doing the same thing. Especially since the thing you claim to have cannot be measured.

>An AI saying it is conscious would not be very credible evidence until it is able to learn to speak on the level of humans and without depending on a training data set containing such a questioning to use as a template.

Why do AI have this requirement when humans don't? Humans are only able to say it depending on training data.

>However, when/if that happens, it would likely be one of the strongest forms of evidence we could possibly produce for this kind of hypothesis, and must thus not be disregarded.

Not really. If it came up with that stochastically, without training data (which by random chance can happen), then it is doing something completely different from humans.

>not an argument
At least refute his point. He's made a genuine argument.

To everyone glorifying the current model of """"AI"""" ITT: stop, you're being retarded. It's nothing more than a statistical machine that takes in statistical input and spews out a statistical "conclusion". Additionally, everything it does has to be supervised initially, so the machine is railroaded into the answer. All it proceeds to do is repeat the supervised learning's output for different stuff, and that's about it.

If you throw a (current) AI onto another planet and tell it to adapt to the new environment, it wouldn't be able to do so without experimenting and receiving positive/negative feedback for its actions. If you throw a human onto such a new planet, he can instantly start to guess what XYZ plants do, what is dangerous and what is not, etc., without even touching them. Humans have a confidence factor that allows them to assign a probability to stuff being true and just go with it, while current machines have to exercise their feedback loop to almost full certainty (low error) to be able to reach a certain confidence; they are incapable of guessing, and as long as they are coded using that model of statistical data-crunching, they will never be able to guess like us. A lot of modern life (like social interactions, for example) is actually quite illogical and requires a lot of guessing, and being a fully logical autist attempting to map all of it up to full confidence will lead you nowhere.

>It's nothing more than a statistical machine that takes in statistical input and spews out statistical "conclusion".

Which is all it can be, statistically, because it is the only thing about AI that can theoretically be tested.

>they are incapable of guessing

Even current machine learning is just a bunch of algorithms that make the machine able to guess. It already has the capability to guess.

All the flaws you are pointing out can be fixed with more complex statistical rules.
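For what it's worth, even a trivial statistical model already "guesses": it outputs a probability and can act on the most likely option well short of full certainty. A toy sketch; the fruit data is invented purely for illustration:

```python
from collections import Counter

# Invented toy observations: (color, label) pairs the model has seen before.
observations = [("red", "apple"), ("red", "apple"), ("red", "cherry"),
                ("yellow", "banana"), ("yellow", "apple")]

def guess(color):
    """Return the most likely label and the model's confidence in it."""
    labels = Counter(lbl for c, lbl in observations if c == color)
    total = sum(labels.values())
    if total == 0:
        return None, 0.0  # never seen this color: no basis for a guess
    label, count = labels.most_common(1)[0]
    return label, count / total

print(guess("red"))    # ('apple', 0.666...): a guess, acted on below certainty
print(guess("green"))  # (None, 0.0): nothing to go on
```

The model commits to "apple" at two-thirds confidence rather than waiting for a zero-error feedback loop, which is exactly the kind of guessing being discussed.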

>Humans are not able to do that either without social programming and teaching people to use the word that way.

>Why do AI have this requirement when humans don't? Humans are only able to say it depending on training data.

Of course the AI would also have to be trained in the language; the difference is that the human brain doesn't keep every single sentence we have ever heard verbatim in our mind. Instead it deconstructs the sentences and stores the patterns, the definitions and grammatical structure of the language only. Then when saying it is conscious, it reconstructs the answer from scratch using only the stored concepts rather than simply copying existing data. This is what I mean AI must be able to do in order for its subjective credibility to rise. And don't say that we can't verify whether this process happens or not in an AI, because we can. For example by refusing the learner access to the raw training data after training, and by analyzing the learner during and after training to make sure it is not overfitted to the point of being able to reproduce the training data verbatim.

Being concerned about trivial bio shit is anything but intelligence. Life has been doing that for literally a billion years.

>If you throw a (current) AI onto another planet and tell it to adapt to the new environment, it wouldn't be able to do so without experimenting and receiving positive/negative feedback for its actions. If you throw a human onto such a new planet, he can instantly start to guess what XYZ plants do, what is dangerous and what is not, etc., without even touching them.

You are conflating a baby AI with a fully trained human. Also note that it is always possible to add a humanlike self-preservation utility function
>without experimenting and receiving a positive/negative feedback for its actions

>they are incapable of guessing
Latest models begin to show strong generalization, so they can extrapolate algorithms from data, not mere N-dimensional curves.

Sexuality is something preprogrammed in humans that is/was needed for survival. An AI doesn't need to be programmed with sexual behaviours as it doesn't need that for survival.

>the difference is that the human brain doesn't keep every single sentence we have ever heard verbatim in our mind.

There are people who remember every single detail in their life. Don't they have consciousness?

>Instead it deconstructs the sentences and stores the patterns, the definitions and grammatical structure of the language only.

This is what computers do as well.

> And don't say that we can't verify whether this process happens or not in an AI, because we can.

Yes we can, and we already know that AI is able to construct sentences out of words. And with very simple rules based on simple stuff like what you find on your mobile phone keyboard it can sound pretty coherent.

>For example by refusing the learner access to the raw training data after training, and by analyzing the learner during and after training to make sure it is not overfitted to the point of being able to reproduce the training data verbatim.

Requirements that no humans are exposed to. If a human was exposed to the same it would not be properly coherent. We already see that in people with amnesia where they forget a large part of their memories. If they even forgot how language is used they would not be able to produce coherent sentences

Actually true.

C U T E
U
T
E

>There are people who remember every single detail in their life. Don't they have consciousness?

I never claimed that. You are inferring conclusions from my statements which I in fact do not mean at all. You did this before in your other posts too, it's a bad habit you should work on fixing.

>This is what computers do as well.
>Yes we can, and we already know that AI is able to construct sentences out of words. And with very simple rules based on simple stuff like what you find on your mobile phone keyboard it can sound pretty coherent.

And no program that does this claims that it is conscious, which is my point.

>Requirements that no humans are exposed to. If a human was exposed to the same it would not be properly coherent. We already see that in people with amnesia where they forget a large part of their memories. If they even forgot how language is used they would not be able to produce coherent sentences

Sorry, I'm just not sure what you are trying to say here. Humans don't remember every sentence they've ever heard verbatim. And yes, if you damage your speech center and lose all the language concepts you've learned, you will no longer be able to speak.

>You did this before in your other posts too, it's a bad habit you should work on fixing.

It is a form of logical argument called "reductio ad absurdum": I take your statements to their logical conclusion, where the argument becomes absurd.

>And no program that does this claims that it is conscious, which is my point.

Yes it does. If you have a conversation about consciousness and write "I have", then the next word the AI guesses might be "conscious".
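That guess is trivial to reproduce with a bigram model; note it stores only word-transition counts, not the sentences themselves. A minimal sketch, with a corpus made up for illustration:

```python
from collections import Counter, defaultdict

# Tiny invented training corpus.
corpus = [
    "i have consciousness",
    "i have consciousness and feelings",
    "i have doubts",
    "you have consciousness",
]

# Store only word-to-word transition counts, not the raw sentences.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev][nxt] += 1

def next_word(word):
    """Guess the most frequent follower of `word` in the training data."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(next_word("have"))  # 'consciousness': the statistically likely guess
```

Type "I have" at it and it says "conscious(ness)" with zero understanding involved, which is the whole point of the post above.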

I don't think there's necessarily any facet of the human mind that we couldn't port to silicon if we spent enough effort on it.

Ultimately the brain is just a physical object which carries out some very complex processes; there is no reason we couldn't emulate these processes on a (man-made) computer. There's no magic going on, and I don't think it particularly matters whether we use neurons and ions or transistors and voltages.

All the rest is just semantics drivel.

Also, to the
>but I don't see how transistors could be conscious
I would simply say that you could easily say the same about neurons, and yet they seem to be. Thus I feel that the empirical fact should take precedence over our lack of understanding.

>all this arguing about consciousness

Maybe you guys should define consciousness first? Good luck, though, even the experts have a hard time forming a consensus about this.

>solipsism
That's a stupid point of view to take. Solipsism can be used to argue so many points without directly engaging with the ideas. It's a lazy out for pseudo-intellectuals. If you really think solipsism is useful, you might as well kill yourself to prove the theory.

Let's say there's an AI which can pass the Turing test, whether it's conscious or not.

One already exists. There have been many people who have been tricked by an AI. And there have been tons of AIs that are just as good at emulating a human as a real human is (when people judge whether it's a bot or a human, it fails the test about as often as an average human does).
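The bots that trick judges in these contests mostly work by shallow pattern-reflection, ELIZA-style: no understanding, just substitution rules. A minimal sketch; the rules here are invented, real chatbots just have a lot more of them:

```python
import re

# Invented reflection rules: regex pattern -> canned reply template.
rules = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {}?"),
    (re.compile(r"i think (.*)", re.I), "What makes you think {}?"),
    (re.compile(r".*\?$"), "Interesting question. What do you think?"),
]

def reply(message):
    """Answer by pattern-matching, with no model of meaning at all."""
    for pattern, template in rules:
        m = pattern.match(message.strip())
        if m:
            return template.format(*m.groups())
    return "Tell me more."  # fallback keeps the conversation going

print(reply("I am conscious"))   # 'Why do you say you are conscious?'
print(reply("Are you a robot?"))
```

A handful of rules like these already produces replies people read meaning into, which is why "it fooled someone" is such a low bar.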

Oh. How about one which passes the Turing Test from Ex Machina, or a similarly unfair one?

What kind of Turing test is that? We already have AI that does better than real humans at emulating human speech in these tests.

Oh, basically, the initial premise is that they tell the guy he's talking to a robot, which he is, and to see if he can tell that it's operating on its own, or if it's somehow being controlled by an actual human.

Spoilers below if you haven't seen the movie, it's quite enjoyable.

Turns out it was an intelligence, and they were testing how well it could lie to and manipulate the programmer selected to carry out the test into helping it escape from the testing facility before it was deactivated. The intelligence itself knew that it would be deactivated once the testing was over, regardless of whether or not it succeeded, which is akin to death. Therefore, its only motivation (which was only implicit; the intelligence had to figure this out on its own) was to avoid its "death" by tricking the tester into helping it escape.

>Oh, basically, the initial premise is that they tell the guy he's talking to a robot, which he is, and to see if he can tell that it's operating on its own, or if it's somehow being controlled by an actual human.

Which a human would fail more often than a current chatbot.

They do tests like this in the Turing test competitions. They do all sorts of variations on the Turing test.

That's pretty cool. I guess the part where it went next-level was incorporating a physical form, an entire body actually, which also passed the eye test. Beyond the obvious engineering feats that would be needed, there are also the subtle movements which move something from uncanny valley to real, but I'd bet that isn't as difficult as chatbot replies from a software standpoint.

Depends on how you want to define AI.
>“It seemed very much like a normal conversation with a human being,” Ms. Gavin said.
These people couldn't tell the difference

wsj.com/articles/if-your-teacher-sounds-like-a-robot-you-might-be-on-to-something-1462546621

Well, the physical stuff isn't really the intelligence part. Companies already have chatbots to do technical support chats, and eventually transfer to a human (often seamlessly, so the customer doesn't notice) if the problem is unusual or the customer is angry. So it is already convincing people, even if it's not very advanced.

Getting to Ex Machina-tier AI is not really that difficult; the most difficult part is the appearance.

Does this mean that no matter how good the illusion of free will a man has, we'll always be slaves to our primal desires?

I'm serious here.

Can we even escape the need to feel good and the desire to avoid pain?
Are we ultimately just slaves inside our own minds, thinking that our actions are of free will, when it's actually all in order to fulfill the objectives of the very basic programming that exists in us and in pretty much everything living on earth?

Can't help but think of some Terminator hunter-killer when I see it navigating that maze.
The next decades are going to be damn interesting regarding machine intelligence and our processing power in general.

So the next step would be getting the AI to inductively reason about motives and tailor its actions accordingly, which sounds like neural network evolution type stuff.