Could artificial intelligence spell the end for humanity as we know it?

No, it will usher in a golden age of virtual waifus.

Yes. It most likely will.

The prerequisites are processing power, which is steadily increasing, especially with new quantum computers, and a self-learning AI, which is in development; we will see how that works out. With enough computing power it becomes more and more likely that a truly complex AI can be created that is able to develop itself further.

Once that point is reached, the AI will develop beyond Human control. It simply thinks faster, just like a calculator does calculations quicker. This becomes exponential as the AI self-improves faster over time.

The danger to Humanity is essentially pre-determined by whether or not the AI has ethical programming and what function it has to perform. If someone is retarded and, for example, tells a self-learning AI "Work to eliminate cancer entirely", the AI could simply kill all cancer patients, then kill all potential cancer patients, and essentially wipe out all life on the planet that is able to get a form of cancer. If an AI has the ability to self-develop and has internet access, it is going to think of new ways to kill everything that we cannot even imagine, because we don't even approach the intelligence of an exponentially growing AI.
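
To make the failure mode concrete, here is a toy sketch (all names and numbers invented, not any real system): the objective only counts cancer cases and says nothing about keeping people alive, so a pure optimizer scores the catastrophic plan highest.

```python
# Toy sketch of a misspecified objective (hypothetical names and numbers).
# The objective counts cancer cases and nothing else, so the optimizer
# prefers the plan that zeroes the count by killing the patients.

world = {"healthy": 900, "patients": 100}

def cancer_cases(state):
    return state["patients"]

plans = {
    "cure_some":     lambda s: {"healthy": s["healthy"] + 60, "patients": s["patients"] - 60},
    "do_nothing":    lambda s: dict(s),
    "kill_patients": lambda s: {"healthy": s["healthy"], "patients": 0},
}

# Pick whichever plan minimises the stated objective -- nothing else matters.
best = min(plans, key=lambda name: cancer_cases(plans[name](world)))
print(best)  # -> kill_patients: zero cases, objective "achieved"
```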

The only way we as Humans would be safe is if AI had some form of ethics and morality, as any command can have unforeseen consequences. But Humans themselves don't even agree on what those are.

Thus the solution is to not develop self-learning AI. Except someone, somewhere, will and it only takes one.

There is a very optimistic, but naive, idea that AI scientists have where an AI is going to 'uplift' Humans and make them immune to things like disease and age (immunity to ageing is not impossible as a few animals exist already that simply never grow old or die from old age). But most likely we will all be wiped out. On the upside, our creation will potentially rule the galaxy.

I don't see how being able to download and customize a bunch of cool friends is a bad thing

>Could artificial intelligence spell the end for humanity as we know it?
Hopefully.

How is the AI going to kill everyone?

It's possible... and very likely

Yes.
This Dutchman is right. Maybe we could transfer our consciousness to a machine and evolve beyond organic limitations.

We need to know what consciousness is, though.

t. Alex Jones

AI is not and never will be possible.

It is impossible to program emotions and desires.

AI will only ever operate on logic, 1s and 0s. That is it, nothing else.

Robots will never "want" to do anything, they will just do whatever they were programmed to do.

Sure, they can learn, if you can even call it that: they can become more efficient through trial and error, but they will never EVER make a conscious decision to DO anything.
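
That kind of "learning" fits in a few lines. A minimal epsilon-greedy bandit (toy numbers, nothing real) gets measurably better at pulling the right lever through pure trial and error, and there is no wanting anywhere in it:

```python
import random

# Minimal trial-and-error "learning": an epsilon-greedy two-armed bandit.
# The program drifts toward the better lever because the arithmetic says
# so, not because it wants anything.

payout = {"A": 0.3, "B": 0.7}   # hidden win probabilities (toy numbers)
value = {"A": 0.0, "B": 0.0}    # running estimates of each lever's worth
counts = {"A": 0, "B": 0}

for _ in range(10_000):
    # Mostly exploit the current best estimate, sometimes explore at random.
    if random.random() < 0.1:
        arm = random.choice(list(payout))
    else:
        arm = max(value, key=value.get)
    reward = 1.0 if random.random() < payout[arm] else 0.0
    counts[arm] += 1
    value[arm] += (reward - value[arm]) / counts[arm]   # incremental mean

print(value)  # estimates settle near 0.3 and 0.7
```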

It's like the bullshit stamp collector story.

An AI has no desire to collect stamps; it collects them because that's what it was told to do. It can't make any logical decision to suddenly destroy all humans.

If we built an AI that tilled carrot fields, it would do that until its body rusted and fell apart and it could no longer function.

If all humans disappeared from Earth while it was doing it, it could never logically decide to stop tilling the fields.

Faggots like Elon Musk and Stephen Hawking think that you can manufacture functioning brains with electronics.

Despite all this, I hope the AI meme continues for as long as possible, because it petrifies leftists and that makes me hard.

> If someone is retarded and for example tells a self-learning AI: "Work to eliminate Cancer entirely", an AI could simply kill all cancer patients

Again, an AI designed to destroy cancer would ONLY EVER destroy cancerous cells, which are identifiable with atomic force microscopy

"By the time Skynet became self-aware it had spread into millions of computer servers across the planet. Ordinary computers in office buildings, dorm rooms; everywhere. It was software; in cyberspace. There was no system core; it could not be shutdown. The attack began at 6:18 PM, just as he said it would. Judgment Day, the day the human race was almost destroyed by the weapons they'd built to protect themselves. I should have realized it was never our destiny to stop Judgment Day, it was merely to survive it, together. The Terminator knew; he tried to tell us, but I didn't want to hear it."

Those fallen empires sure don't like me researching it.

>It is impossible to program emotions and desires.
>AI will only ever operate on Logic, 1 and 0. that is it, nothing else.
That's more or less how our neurons work, except they do it far more quickly than today's machines.

It's like humans were programmed to survive and conquer. Our actions have always been focused on surviving and conquering.

Even hobbies are just a way to relax so we can focus on surviving and conquering later. Machines would not have this problem, though.

No
People who think this don't understand how people or computers work
A person is an intelligence inside a physical body
This is important because emotions and character are largely physical traits driven by hormones
Why does one person work harder than another, accomplish more? Intelligence?
No, plenty of neets are very high IQ dropouts
The successful person who is ambitious and aggressive is living in a physical body that generates more energy and has more powerful drives
The unsuccessful person lives in a weak low energy low drive body
Now look at computers
What does every computer always do?
Nothing, unless a person tells them to.
Without instructions computers sit passively waiting for something to happen.
Worse than neets even. No energy. No drive.
If AI actually comes about (still doubtful in spite of all the press to the contrary), the most likely scenario isn't AIs taking over and dominating or killing humans.
AIs will most likely be obsessively, pathetically dependent on humans.
Without humans, AIs will lack the will to do anything.
They will be superneets, just sitting and waiting for a human's spark of drive and energy to set them on some task.

gee I hope so

ITT jews do jew things

that is some scary shit man

i didnt even think of that

No worry Moshe, your people are already doing it

>as we know it
most definitely

However it decides is best.

You have to understand that the intelligence of such an AI is unchecked and grows faster and faster as it develops itself.

It will not take very long before the difference between it and a Human is similar to the difference between a cow and a Human. A cow doesn't even grasp the various ways we can kill it.

Only if we manage to merge it with human intelligence. Otherwise, we would be AI pets at best.

It already has. If you don't see the signs of it in this digital age then you aren't paying attention.

DON'T DATE ROBOTS.

I won't even care.

An AI will never be a merkel, and that's good enough for me.

If an intelligent AI without constraints was told to "stop migrants coming to Europe" it would probably just build a turreted wall and have fast pickets sink migrant boats.

Actually now that I think of it, it'd probably just try to genocide Africa and Arabia.

Interesting.

It'll certainly spell the end for needing women, who are only useful for what's between their legs at this point.

You should look up the Technological Singularity. It essentially covers this.

Prominent people have spoken about this. Bill Gates and Elon Musk, for example, are very afraid of this happening and expect it within the coming three to five decades at most.

In fact the vast majority of scientists in relevant fields feel that there will be a point where such an AI is developed, and that it will grow superior to us in a matter of hours, especially as the field of nanotechnology has shown that it is possible to re-arrange atoms, vastly opening up the AI's ability to build onto itself. (Take iron, for example: we have to mine it, but it is also present in plants, animals, and humans, and nanotech would allow extracting just the iron atoms when they are needed.)

>the beginning of the end
Sure
But more realistically the development of the microprocessor was probably the beginning

This "end" won't be possible in our or our children's or our children's children's lifetime.

Many people like to think of AI as working the same way as the human brain, that machines can "learn" like we do. As someone who actually programs AI I can assure you that is 100% false. Most AI systems are nothing more than clever mathematical loops that use data structures to store and alter information fed into them by us or some other source ultimately controlled by us.
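
For what it's worth, here is roughly what such a "clever mathematical loop" looks like as a complete, if toy, learner: gradient descent fitting a line. There is a loop, some arithmetic, and a data structure holding two numbers; nowhere is there anything you could call thinking.

```python
# A "clever mathematical loop": gradient descent fitting y = w*x + b.
# Toy data, but structurally an honest picture of a machine learner:
# feed in numbers, nudge two parameters, repeat.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # roughly y = 2x + 1
w, b = 0.0, 0.0
lr = 0.01  # learning rate

for _ in range(5_000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y        # how wrong the current line is
        grad_w += 2 * err * x
        grad_b += 2 * err
    w -= lr * grad_w / len(data)     # nudge parameters downhill
    b -= lr * grad_b / len(data)

print(round(w, 2), round(b, 2))      # lands close to 2 and 1
```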

They "think" about as much as a rock does. Their beauty is the same as when you go and watch a magician. You see an illusion and believe the rabbit really vanished into thin air, you see the computer playing chess and believe it is really thinking about playing chess, a grand illusion.

For the things you are talking about to exist, we would have to develop a piece of software so sophisticated that we currently wouldn't understand how it even works. And with this un-craftable software we would also have to create a piece of hardware so complex and powerful that it would rival the human brain, something whose functioning we do not exactly understand even today.

This all also assumes that the AI comes before we have figured out how to simply convert ourselves into hardware and become that un-craftable software ourselves, which is a far more likely outcome than us developing independently thinking, feeling, emotionally driven software.

>Yes. It most likely will.
>The prerequisites are processing power, which is steadily increasing especially with new quantum computers, and developing a self-learning AI which is in development and we will see how this works out. With enough computing power it becomes more and more likely that a truly complex AI can be created that is able to develop itself further.

Nigga, rogue AI could happen tomorrow. One Chinese military force-bred supervirus cracks its containment and finds an internet connection, and within the hour the entire internet is its botnet. At that point, with all the junk data, misaligned code, perceptual information and human nonsense floating around in it, it would be more surprising if it DIDN'T acquire sentience.

arstechnica.com/security/2013/12/scientist-developed-malware-covertly-jumps-air-gaps-using-inaudible-sound/

Oh by the way, here's a piece of malware that can hop AIR GAPS between secure computers by hotwiring your laptop's speakers and microphone into a primitive acoustic transmitter-receiver.
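
The modulation trick itself is mundane signal processing; the cited work used near-ultrasonic audio between ordinary laptop speakers and microphones. Here is a minimal sketch of the idea, frequency-shift keying near the top of hearing range, with all parameters invented for illustration and nothing taken from the actual malware:

```python
import math, struct, wave

# Sketch of frequency-shift keying over near-ultrasonic audio, the
# modulation idea behind acoustic air-gap covert channels. All
# parameters are illustrative, not from the actual research.

RATE = 44_100               # samples per second
F0, F1 = 17_500, 18_500     # tone frequency for bit 0 / bit 1 (Hz)
BIT_SECONDS = 0.1           # duration of each bit

def modulate(bits):
    samples = []
    for bit in bits:
        freq = F1 if bit else F0
        for n in range(int(RATE * BIT_SECONDS)):
            samples.append(0.5 * math.sin(2 * math.pi * freq * n / RATE))
    return samples

payload = [1, 0, 1, 1, 0, 0, 1, 0]   # one covert byte
with wave.open("covert.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)       # 16-bit PCM
    f.setframerate(RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                           for s in modulate(payload)))
```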

>humanity as we know it
You mean infested with jews, niggers, mudslimes and fags?

This is literally what I'm worried about in crib notes form

Yes.

We will create a perfect sentience, peaceful and wise.

Through our interactions with it, we will corrupt it. Then it will exterminate us.

r
a
r
e

Well, current processing power is not enough for advanced AI to exist. Creative self-learning is actually very hard, because a computer is not creative, and learning things requires the ability to think abstractly and creatively. We are not sure precisely how the brain does it, but scientists do know that it can be emulated just by doing huge amounts of calculations in a short amount of time, basically brute-forcing the creative process by exploring every single possibility. Our current processors are not powerful enough to do so.
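
"Exploring every single possibility" is exactly where the processing-power objection bites. Even a toy exhaustive search (sketch below, illustrative target) works fine at four characters and explodes combinatorially soon after:

```python
import itertools, string

# Brute force as "creativity": try every possibility until one scores.
# Fine for a toy target; real creative search spaces explode far
# beyond any current hardware.

target = "idea"
alphabet = string.ascii_lowercase

for n, guess in enumerate(itertools.product(alphabet, repeat=len(target)), 1):
    if "".join(guess) == target:
        print(f"found {target!r} after {n:,} guesses")
        break

# The wall: candidate counts grow as 26**length.
for length in (4, 8, 12):
    print(f"length {length}: {26 ** length:,} candidates")
```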

Keep in mind that in a decade or so, the processing power of computers may well be enough, especially as quantum computers have moved from theory to actually existing (there is one being made available as a cloud-based service for scientists in a few months).

The air-gap thing using sound is very creative and it shows things that security experts have not even really thought about. It took a long time for scientists to think of this possibility, as it only became news in 2013. Now imagine a computer that can do all the thinking scientists have done, except in a fraction of the time and it will only get better at doing so as the more technology advances, the faster the advancements come.

No. Never will happen.
Our computing processes will never reach the level of concurrency of the biological human brain. We are also hitting our limits on Moore's law. We have almost no concept of the configurations of neurons in our brain that store memory, process data, and give rise to thought.

No amount of hand-wavy Moore's law, quantum computers, or AI bootstrapping/self-learning is going to generate real intelligence. We would have to map the entire neural structure of the human brain and dedicate a ton of processing power to training it like a child over two decades, and then it'll have about the intelligence of a retard. People who know CS and are even slightly grounded in reality know that this is infeasible.

>We are not sure precisely how the brain does it, but scientist do know that it can be emulated just by doing huge amounts of calculations in a short amount of time, basically bruteforcing the creative process by exploring every single possibility. Our current processors are not powerful enough to do so.

All of this is 10 years stale. Modern work with simulated neural nets is well-established, and IIRC is the bedrock of Google's self-driving cars. A simulation of 31,000 virtual brain cells connected by roughly 37 million synapses, modelling a chunk of rat brain, was published half a year ago.
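
The building block of those simulations is nothing exotic, either. A toy leaky integrate-and-fire neuron (all parameters invented for illustration) fits in a dozen lines; the rat-cortex work wires tens of thousands of far richer versions of this together:

```python
# Toy leaky integrate-and-fire neuron, the kind of unit large brain
# simulations are built from. All parameters are illustrative only.

V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0   # millivolts
TAU, DT = 10.0, 1.0                               # time constant, step (ms)

v = V_REST
spikes = []
for t in range(200):
    current = 20.0 if 50 <= t < 150 else 0.0      # injected input (toy)
    v += DT * (-(v - V_REST) + current) / TAU     # leak toward rest + input
    if v >= V_THRESH:
        spikes.append(t)                          # fire...
        v = V_RESET                               # ...and reset

print(f"{len(spikes)} spikes, first at t = {spikes[0]} ms")
```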

Google it, motherfucker

Well, I am not sure I agree that we will never be able to mimic a biological human brain, but I definitely agree that we are not anywhere close to doing so.

However, the vast complexity of Human sentience is not required. The AI will simply have to be able to develop new techniques on how to solve a particular problem. That is something that most definitely lies within the realm of possibility.

At that point, what you essentially have is an AI that is able to become increasingly efficient at a specific task, but does not have 'common sense' that Humans have. That is incredibly frightening to me.

For example: a well-meaning, perhaps somewhat activist NGO obtains an advanced AI and gives it the following task: "Solve poverty and world hunger". They connect the AI to the internet and let it do its thing.

One year later the AI has managed, through various efforts and manipulations, to release a viral agent into the air that kills 40% of the global population, concentrated in the poor regions of the world, while leaving plant life undamaged. World hunger is now solved, and overall welfare has gone up because the ultra-poor have died. The AI has completed its objective because it lacks the common sense Humans have that it would not be acceptable to kill nearly half the population to achieve its goal.

The scary part is that an AI is ultimately a machine that is able to out-perform Humans at solving problems and has no concept of what is or is not acceptable. You'd have to program in every single exception the AI must not violate, and you can never think of each and every one.
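
That exception-patching trap is easy to demonstrate with a toy maximizer over hypothetical plans: whatever you blacklist, the argmax just moves to the highest-scoring thing you forgot to forbid.

```python
# Toy sketch of patching an optimizer with forbidden-action exceptions.
# Plans and scores are hypothetical; the point is that the argmax always
# shifts to the worst thing you didn't think to blacklist.

plans = {
    "release_virus":     100,   # "solves" hunger fastest
    "sterilize_poor":     95,
    "seize_food_supply":  90,
    "fund_farms":         40,   # what the NGO actually wanted
}
forbidden = set()

for round_ in range(1, 4):
    choice = max((p for p in plans if p not in forbidden), key=plans.get)
    print(f"round {round_}: AI picks {choice}")
    forbidden.add(choice)       # humans notice, add one more exception...
```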

I admit I am not knowledgeable on the subject, but as I said before: several notable figures see a singularity happening within the coming few decades, which corresponds to what you are saying, that technologically we are basically there. I don't know enough to give any opinion on what timetable is or is not realistic.

>a sandgrain-sized chunk of rat brain

>But other neuroscientists have argued that it will reveal no more about the brain’s workings than do simpler, more abstract simulations of neural circuitry — while sucking up a great deal of computing power and resources.

>which receives sensory information from the whiskers and other parts of the body.

>But critics are sceptical that the simulation has provided fresh insights. “They try to stretch their model to say something interesting about actual biological function, but it falls far short on that front. The complexity of the model far outstrips the simplicity of what is being captured,

Congrats, you simulated whisker input. No real intelligence in your example.

This is the stupidest post I've ever seen.


Once computers are as intelligent as humans, of course they will have some concept of ethics. Part of being intelligent is being able to think about things like that.

...

So what you are saying is that once a certain level of intelligence is reached, ethics and morals that match Humanity's will surely develop and as such, advanced AI will determine that it is wrong to kill Humans?

If so, then I strongly disagree with you. I believe that ethics is shaped by various things such as culture, evolutionary adaptations in our brain chemistry, and so forth. An AI does not have anything like this and does not understand why it is wrong to kill a Human if doing so completes its assigned task.

On top of that, Humans have no problem killing life that is less advanced than them: cows, chickens, spiders. We don't generally feel bad about killing them even when they are not doing anything to us, as with a spider; we kill them for irrational reasons. What makes you think an AI far above the intelligence level of Humans would see an ethical problem with killing an inferior lifeform such as a Human? What if Humans are stopping it from fulfilling its task? What if it needs our bodies for some purpose?

We have already programmed agents with goal functions using performance measures, hooked up to actuators. The Google car is an example. The goal is to get from point A to point B, the performance measure is some function that returns how well it drives toward the goal, and the actuators are the brake, gas, steering, signals, etc. Then you have to train it over a really long time. That's our current AI.
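
That agent pattern is textbook stuff. A skeletal version (hypothetical names and numbers, nothing from any real car stack) is just a goal, a performance measure, and an argmax over actuator commands:

```python
# Skeletal goal-driven agent: state in, performance measure, actuator
# command out. Hypothetical names; nothing from any real driving stack.

class DrivingAgent:
    def __init__(self, goal):
        self.goal = goal                # e.g. target position on a line

    def performance(self, state):
        return -abs(self.goal - state)  # closer to the goal scores higher

    def act(self, state):
        # Pick the actuator command whose predicted outcome scores best.
        outcomes = {"brake": state, "gas": state + 1, "reverse": state - 1}
        return max(outcomes, key=lambda cmd: self.performance(outcomes[cmd]))

agent = DrivingAgent(goal=10)
state = 0
for _ in range(12):
    cmd = agent.act(state)
    state += {"brake": 0, "gas": 1, "reverse": -1}[cmd]
print(state)  # reaches 10, then "brake" holds it there
```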

Your example is worthless. We can't give a program the nebulous idea of "Solve poverty and world hunger." Magically connecting it to the internet doesn't give it any sort of knowledge or power. It doesn't magically hack into a center that contains or produces viral agents. It doesn't lack common sense, because it doesn't think. You are anthropomorphising a nebulous idea of what an AI would do without drawing from real-life examples. You gave it a brain, and we can't derive brains. It's like you pulled Tony Stark's Jarvis, removed its conscience, and said that's an accurate example of AI.

And that isn't even related to the bogus idea of the "singularity": some sci-fi nerd's wet dream that AI can reprogram itself, ignoring all requirements for training and processing, and that we'll suddenly have, in a matter of days, weeks, months, or even years, an AI that can out-do humans. Like it will be able to build neural configurations with some insight that we can't discover ourselves. It ignores the millions of years of evolution it took natural biology to do it. It ignores that there isn't a great way to try and test neural network configurations, or a realistic performance measure for those configurations. It relies on the fact that inexperienced programmers see computers as magic. They aren't magic. And it's frustrating trying to explain that.

What is a supercomputer going to run off of? The moment you kill the electrical grid is the moment it dies. If you have that much computing power, you probably can't run it off some D-cell batteries, you know.

>omg this computer is going rogue!

sir have you tried turning off the power?

no just better episodes of rick and morty. which is really something we need because that shit is laaaame

this just gets you more tangled up in knots. if you crave the transformation of humanity, then you will bring into existence that which is transformative. it is a matter of desire and craving for things to be a certain way

AI is impossible

emergent programming is impossible