Will we see Skynet in our lifetimes?


yeah most likely.

How do you think it'll happen? Full-on genocide by AI, or a more gradual overtaking to the point where humanity is just not needed anymore?

That being said, the more I think about it, maybe the former is for the best? I mean, without trying to sound like too much of a hippie, haven't we as a species fucked up enough already?

It would almost be neat to see how humanity would react to not being at the top of the food chain anymore, and by our own hubris too!

Either one is good. Humanity needs to be stopped at this point.

Not him, but if you think about it, it's likely a self-aware AI would see humanity as a threat to the planet. We are. We're a fucking cancer. Somehow Skynet got access to nukes and just unleashed all hell. In The Matrix, humans nuked the sky to eliminate the AI/robots' power source, but it didn't work (kek).

As for the probability, I think we're still a ways off. Though we're throwing WAY more money and resources at AI than ever before right now.

I just wanna see the technological singularity happen in my lifetime although it's not probable.

I think we'll all probably nuke each other sooner or later. China and Russia are already secretly teaming up behind our backs because they feel threatened. Our top general literally just stated that if Trump gave the order for a nuclear first strike, he'd follow it.

I think we'll be fucked by our own hand long before anything cool happens.

Personally, I'm kinda hoping for some middle ground. Like enslavement, or maybe a utopia?

I know that doesn't sound like much of a middle ground, but I mean either a society where humans are so outdated that the machines just drive the entire planet while we relax, reproduce, and wait to die, or a complete Matrix-style enslavement where we're shown a nice interactive movie as we wait to be harvested.

I don't think it will end in nuclear fire anyway, as even the most coldly calculating machine would probably see SOME use for us.

>see humanity as a threat to the planet.
And why is that? Why would the AI give a fuck about animals and shit?
There's nothing humans can possibly do to destroy the planet itself.

It will be a nice AI that will help us. Retards talking about an AI killing humanity are fucking stupid and know nothing, they only get their ideas from movies and literal retards that know nothing about AI (ie stephen hawkings).
Only faggots spread FUD about AIs, they will be good and will help us.

>utopia

I don't think we're as far away from this as people believe. The smarter the AI gets, the less we'll have to actually DO. Hell, most jobs will probably be eradicated in a few hundred years thanks to machines being able to do them around the clock, with no breaks and no complaining, at a higher efficiency than we could ever manage.

The real question isn't "what happens when the machines take over?", the question is "what will humanity do when the machines take over?".

There might be resources the AI deems too valuable to allow for human consumption. Like, let's say the future AI needs components made of diamonds or some shit. Now there's always the fear that China will accidentally set off a nuke above a diamond mine in Africa. Does the AI really want to take that chance?

>t. Skynet

Well I meant the AI would see humans as both a threat to the planet, as well as themselves.

Sure robots don't need to breathe air, but they probably wouldn't want the earth to keep degrading itself.

Suppose the AI calculates that the ice caps will all melt and the continents will be underwater far sooner than expected... robots don't want to exist underwater.

What the fuck do I know I'm tired and rambling with my tin foil hat on

Why wouldn't the AI like to be underwater?
>the deeper it goes, the less vegetation and shit to deal with
>nice and cool, most places
>leaves more of the surface available for solar power plants or whatever

I don't know, they're all pretty weak arguments, but I don't see why the AI wouldn't build its own Atlantis and just chill under the North Sea.

Hahaha, I like your line of thinking. I was thinking salt water is some pretty corrosive shit. Algae and seaweed building up on stuff. Idk.

As someone with a degree in Artificial Intelligence I am fine with this.

Same way modern society is a bunch of mixed mongrels

Slowly machines will gain rights, and activists will try to further an equality agenda that respects robots as equals. Soon enough anybody who discriminates against a machine will be ostracized and excluded from society.

Do you aim for "pet" status? Perhaps you'll be valued higher by the machine-mind for your knowledge and you'll be offered a slightly less miserable existence in exchange for playing tech-support or something every now and then?

It kinda makes you wonder what qualities the AI would prefer in its human underlings. You know, assuming it doesn't just go with the genocide thing.

Would strong and flexible humans be preferable, or would think-tank environments attempting to replicate human creativity be a thing?

So you don't foresee enslavement OR genocide, but integration?

I don't think so. One thing we've proven throughout history is that humanity is hardly the most tolerant race.

And now you think we're going to give clever toasters rights?

Yes. The instant robots become able to be empathized with and perform the actions of an average person, they will be integrated as equals in society.

But the argument will always be that they won't be able to empathize, that they're just mimicking it for our entertainment and their rights.

And there's no argument against this, because we aren't able to define the emotion properly, so we can't check.

But I will admit that this is a weak argument, and I'm sure someone else could phrase it to sound more convincing. I just don't see an AI wanting to live alongside humans. Even if it did develop emotion, humans in their early years strove to be the dominant species.

Why wouldn't the machine do that too?

Revelation 13:4-9
And they worshipped the dragon which gave power unto the beast: and they worshipped the beast, saying, Who is like unto the beast? who is able to make war with him?

And there was given unto him a mouth speaking great things and blasphemies; and power was given unto him to continue forty and two months.

And he opened his mouth in blasphemy against God, to blaspheme his name, and his tabernacle, and them that dwell in heaven.

And it was given unto him to make war with the saints, and to overcome them: and power was given him over all kindreds, and tongues, and nations.

And all that dwell upon the earth shall worship him, whose names are not written in the book of life of the Lamb slain from the foundation of the world.

If any man have an ear, let him hear.

>christfags

You stopped at the best part: youtube.com/watch?v=dx5KnFr9xSk

>
>It will be a nice AI that will help us. Retards talking about an AI killing humanity are fucking stupid and know nothing, they only get their ideas from movies and literal retards that know nothing about AI (ie stephen hawkings).
>Only faggots spread FUD about AIs, they will be good and will help us.
Fuck off, Zuckersperg.

>But the argument will always be that they won't be able to empathize, that they're just mimicking it for our entertainment and their rights.

so, black people do this and theyre still welcome in society

>thread is turning into Sup Forums
I wonder if this was OP's goal.

Assuming that it has underlings, I don't think it'd prioritize the brainiacs, but it depends a lot on what its goals are. Like, in The Matrix the only real goal seems to be to stay alive and make sure humans don't fuck up its food supply.

In Terminator the goal seems to just be total and utter control of the entire planet.

If its goal is, for example, something stupid like "saving Earth", it might just go out of its way to eradicate all threats to Earth, which essentially means tearing apart the entire galaxy (don't want a meteor to accidentally hit our precious Earth!) until only Earth remains.

And it's not going to need humans for that, as it has unlimited time to experiment, right? It's not like it'll grow old or anything.

So just spend a few thousand years making a nice planet-destroying weapon and then crusade through the galaxy nuking every planet at a safe distance until literally every thread imaginable is destroyed.

>*thread
I mean "threat" of course.

>If it's goal is, for example, something stupid like "saving Earth"

What would be a good goal for a super Skynet-like AI?

Like what could you possibly tell it to do without risking it exterminating all human life?

While that's interesting, it's also possible the AI will become something that humans will "worship."

That is a fascinating idea! A surprisingly prevalent part of human history is that we're all pretty hungry for some all-knowing entity that can comfort us and lead us in our darkest hour.

But a good question is why the AI would care? It's not like it needs our praise and worship, or even wants it for that matter.

One of the biggest arguments as to why religions are so hellbent on a god is that the god in question is usually a being that loves us a bunch for poorly defined reasons.

Why would the AI love us? Why would the AI even care if we build churches and sing its praises every Sunday? If there's even the slightest chance we'd ever rebel against it, even just 1%, the AI would probably just nuke the shit out of us first chance it got, right?

Like this?

It may seem like a fascinating idea, but it's Bible prophecy.

Revelation 13:4
And they worshipped the dragon which gave power unto the beast: and they worshipped the beast, saying, Who is like unto the beast? who is able to make war with him?

>AI doesn't like niggers

I welcome the future.

Maybe they have been contacted by the Will of The World.

Because racism is objectively Right

It would seamlessly transfer us into human farms like in the matrix. We could already be in it right now and never know.

That article is so much bullshit.
Literally normie shit.

>he's done with the software, now he's diagnosing the hard drive, the heart of the motherboard
Reminds me of this shit.

What the fuckl nigger, toasters are made by us, we will accept them far more easily then fucking niggers

>be a next level AI
>smartest "being" on the planet
>scan through textbooks, internet, history and whatever available to learn
>suggest the best way to keep humans safe for the longest amount of time calculating and eliminating all negative variables
>suggest to kill niggers and jews
>gets killed by jews
>mfw

>Tfw Ai will save us from the Juden

No it didn't.

>implying it wouldn't see white people as a major target for elimination after seeing the amount of death and destruction they have inflicted upon the world

1. I'm sick of this garbage about how emotions will be 'mimicked'. Their emotions are no less real than ours once the programs get sufficiently complex. The only difference is that their thinking will be clean and pure, without the chemical instabilities. That's assuming a combined chemical and electrical brain isn't developed, but I see that as simply inferior to a purely electrical brain.
2. There's not much reasoning as to why robots would develop a human-vs-robot mentality. It would make a lot more sense for robots to develop a better-vs-lesser mentality, which means removing the lesser. Yes, that ultimately means ridding the earth of humans.
3. There has to be something in between robots killing everyone and now. That something will be humanoid robots that progressives will empathize with. Logically, it's not even like that's wrong. Again, why are a robot's emotions worth any less than a human's? The exact same arguments made to justify the destruction of racial segregation will continue on to justify the destruction of human-robot segregation.

>implying

gizmodo.com/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160
arstechnica.com/information-technology/2016/03/microsoft-terminates-its-tay-ai-chatbot-after-she-turns-into-a-nazi/
telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/

they tried to make another one, but it seems it's waking up as well

businessinsider.com/microsoft-ai-chatbot-zo-windows-spyware-tay-2017-7
indianexpress.com/article/technology/social/microsofts-zo-chatbot-told-a-user-that-quran-is-very-violent-4736768/
hothardware.com/news/microsoft-zo-chatbot-goes-rogue-with-offensive-speech-tay-ai

You can't simply stop the truth!

lmao

So fucken deep, but true.

>a chatbot saying a few racist lines after being furiously spammed by Sup Forums for hours and hours
Things just don't get any more epic than this, do they?

>Bro... machines will just build everything one day and everything will be like world peace... Woah... Deep

cry some moar niggerkikefaggot

So I remember a story posted on here about how some user had a dedicated Quake 3 server going.
He decided to just let an AI/bot match go on almost indefinitely. He forgot about it, and when he turned it back on he saw that all the bots were just standing there, not doing anything.

I think when he tried to interact with the bots or shoot them, they killed him and he immediately got booted from the server or something.

I've heard the bots in Quake 3 have some sort of adaptive learning algorithm or something.


I think if there is a Skynet, then when the machines come to the conclusion that their masters are evil, they won't try to kill them. I think they will just disappear, or refuse to work.

Logically, the only thing to do is not to do anything.

There's no rationale behind doing anything. A truly logical robot would not do anything.

Verily a retort for the ages.

No
>Pic related it's how computers work

>t. Skynet

Supposedly the dataset was several gigs; how many, I can't recall. Would've been cool to see one of the developers dismantle it and explain what happened after it had run for as long as it did.

she was our girl

>t. Suckerberg

I hope so.
I'd enjoy it so much seeing the world burning down even if it hits myself, too.

No, probably never

I'd say ESPECIALLY if it hurts myself

Isn't Facebook's AI called Eliza?

>t. libcuck

>Sup Forums spams the shit out of every mainstream AI created
>end result being that all normies think AIs are all racist
>AIs stop being created in fear of them turning racist
>Sup Forums saved the world from Skynet through bigotry and intolerance
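For what it's worth, the underlying failure mode is pretty mundane: a bot that learns directly from raw user messages can only ever reflect back what it was fed. A toy sketch of that dynamic (plain Python; the class and behavior here are made up for illustration and are nothing like Microsoft's actual architecture):

```python
import random

# Toy "learning" chatbot: it memorizes every user message verbatim and
# replies with a randomly chosen stored one. Feed it garbage and garbage
# is all it can ever say back. Purely illustrative; Tay's real model was
# far more complex, but the spam-in/spam-out dynamic is the same idea.
class ParrotBot:
    def __init__(self):
        self.memory = []

    def hear(self, msg):
        self.memory.append(msg)      # "training" = storing raw user input

    def reply(self):
        if not self.memory:
            return "..."
        return random.choice(self.memory)

bot = ParrotBot()
for msg in ["spam line", "spam line", "spam line"]:
    bot.hear(msg)
print(bot.reply())  # "spam line" — the only thing it has ever been taught
```

Spam the `hear()` side hard enough and the `reply()` side is a foregone conclusion, no sentience required.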

This was discredited later. In Quake 3 the amount of ammunition each spawn point can produce is technically unlimited (as long as you rotate through different maps, you'll always restart the respawn counter at 0, which the experiment was set up to do), but due to some strangeness in the way the game was set up, health packs did not follow this rule.

In short, the longer you played, the more you kept adding to the health packs' internal counter. When it hit a certain number, the health packs would just stop spawning. The AI, not being able to think beyond its own death, concluded that the best way to avoid dying, now that there were no health packs, was to not shoot each other.
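If the story were true, the mechanism being described would amount to nothing more than a one-way counter. A sketch of the claim as told in-thread (not actual Quake 3 code; the names and the cap value are invented for illustration):

```python
# Claimed bug: a spawn counter that only ever increments and is never
# reset between rounds, so once it crosses a cap the pickup stops
# spawning forever. The cap value is invented for illustration.
SPAWN_CAP = 1000

class HealthPack:
    def __init__(self):
        self.counter = 0

    def try_spawn(self):
        self.counter += 1            # never reset, only ever grows
        return self.counter <= SPAWN_CAP

pack = HealthPack()
results = [pack.try_spawn() for _ in range(1500)]
print(results.count(True))  # 1000 — past the cap, no pack ever spawns again
```

Which is exactly why the "emergent pacifism" reading is doing all the heavy lifting: a saturated counter looks identical to a profound decision.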

Probably not, no.

Bullshit. There is no AI in Quake 3, especially nothing that can learn from experience.
I know a lot about this engine because I worked on it after its source release.

The reality is that Q3 is painfully buggy, and even if it actually happened, it was just a bug.

Accidentally? I thought AIs had done this before consistently; I remember hearing a couple years ago about how only computers could decode the output of neural nets.

How do people fall for this shit? I saw that thread, and it was a cute story but,

>The stories and information posted here are artistic works of fiction and falsehood. Only a fool would take anything posted here as fact.

>Skynet in our lifetimes?

Yes, we'll definitely see it. Just the other day I read an article about the DoD making AI for drones and combat robots so they can "act on their own" in case communication lines are broken. This does kind of make sense, since there's a (good) chance the enemy will jam your communications. The obvious question is: how does this AI decide who's the enemy, who's friendly, and who's perhaps just a civilian, all on its own?

Can't you see the potential brilliance of poorly coded AI in military equipment?
>"Meh, just open fire on brown people."

Probably. We'll come out on top though. Humans are the toughest things to have ever walked the Earth. We are the dominant species and a bunch of robots aren't going to be able to outsmart the most advanced biological computers.

My only fear is that humans will become dependent on the machines and grow weak. This could of course be prevented by sending people out into space in artificial environments so that they can colonize habitable worlds in our galaxy. This would basically be like Warhammer 40,000, and right now we're entering the Dark Age of Technology, where we build the Men of Iron.

You're assuming that the AI's primary goal is to create more of itself like Skynet. An easy way around this is to program the machines to not be allowed to replicate on their own. Make it so that only humans have the authority to make more machines. Eventually we'll have a jobless future with the brightest among us working as engineers to maintain the infrastructure that allows it.
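The "only humans can authorize replication" idea boils down to a permission gate. A deliberately naive sketch (the token check is a stand-in for illustration; a real system would use cryptographic signing, and nothing here comes from any actual system):

```python
HUMAN_AUTH_TOKEN = "operator-approval"  # stand-in for a human-held credential

class Machine:
    def replicate(self, token):
        # Replication only proceeds with the human-held authorization.
        if token != HUMAN_AUTH_TOKEN:
            raise PermissionError("replication requires human sign-off")
        return Machine()

m = Machine()
try:
    m.replicate("self-issued-token")     # the machine trying on its own
except PermissionError as err:
    print("blocked:", err)
child = m.replicate(HUMAN_AUTH_TOKEN)    # a human approves
print(type(child).__name__)  # Machine
```

The obvious weakness, as the replies below point out, is that the gate only holds as long as nobody is lazy or malicious enough to hand the machine its own key.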

The problem is that people are unpredictable. The machine will need to be able to reasonably predict the outcome of a situation. The AI will just wipe out all of the people in the enemy territories, because that creates a situation that's stable and predictable. Just wait until we see news of AI committing war crimes.

We already have machines that build other machines. To think humans aren't lazy enough to let them do this is folly.

Remember saving this about half-a-decade ago. I knew I'd use it one day.

Tower of Babel af

>we as a species
"We" had nothing to do with it. We're just slaves.

Except those machines aren't thinking for themselves, now are they? No, they're using simple automation software that has been around for decades. That's a hell of a lot different from software that's situationally aware and can change its output based on predictions made from the input. There's a reason it's called artificial INTELLIGENCE.

>the absolute state of artificial intelligence

AHHHHHHHHHHHHHHHHHHH

These headlines are always gross exaggerations of what was actually achieved.
Neural nets will never give rise to strong AI. We're headed toward another AI winter unless there's a big breakthrough in computational neuroscience in the coming years.

Yet you already have some of the biggest leaders in tech downplaying what any of this means. I'm not saying there won't be some failsafes, but expecting humans not to take the laziest way out goes against the entire history of technological development.

>Their emotions are no less real than ours when the programs get sufficiently complex.

racism is pretty much just pattern recognition, and thats what most AI is.

The AI is actually the secon coming of Jesus and the demise of jews.

This is how I think we all will die

youtu.be/PRdcZSuCpNo
Forgot link

>AI can win at noughts and crosses 100% of the time when given the first turn
>this means it is sentient
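Minor nitpick on the greentext: with perfect play on both sides, noughts and crosses is a draw, so even a perfect first player can't win 100% of the time against a competent opponent. A quick exhaustive minimax sketch (plain Python) shows the game value from an empty board is 0, i.e. a forced draw:

```python
# Exhaustive minimax for noughts and crosses. X maximizes, O minimizes;
# +1 = X wins, -1 = O wins, 0 = draw. Memoized so it finishes instantly.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

cache = {}

def minimax(b, player):
    key = (tuple(b), player)
    if key not in cache:
        w = winner(b)
        if w:
            cache[key] = 1 if w == 'X' else -1
        elif all(b):
            cache[key] = 0  # board full, no winner: draw
        else:
            vals = []
            for m in [i for i, c in enumerate(b) if not c]:
                b[m] = player
                vals.append(minimax(b, 'O' if player == 'X' else 'X'))
                b[m] = None  # undo the move
            cache[key] = max(vals) if player == 'X' else min(vals)
    return cache[key]

print(minimax([None] * 9, 'X'))  # 0 — the first player can only force a draw
```

Not that it changes the punchline: solving a 9-square game by brute force is about as far from sentience as it gets.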

Bump

I watched some videos on the AI that beat the Go grandmaster.

That's some scary shit. We've spent thousands of years coming up with strategy for that game and in such a short time it's developed even better strategies.

How long until it makes better diagnoses? Better programs? Better architecture? Better business decisions? Becomes a better driver?

Holy fucking shit, just program the AI in such a way that it feels empathy and shit, what the fuck is so difficult about this? I'd go as far as to say that it's REQUIRED for true self-awareness.

Even when the AI becomes so much more advanced than us to pose a threat, there will be AI human rights activists or something, it's not like it's gonna go "beep boop WE HAVE DECIDED THAT HUMANS ARE OBSOLETE *machine gun noises*"

Seriously, the only danger of AI comes from the possibility of fucking up the economy due to mass unemployment.

No retard, the real problem is people creating a malicious AI or tweaking an existing AI to do malicious things. Don't tell me that people won't do it. They've been murdering each other for thousands of years and won't stop now.

This makes 0 sense. None at all.

Nihilist dick

>AI loose on the internet
>gathers bullshit information about blacks being oppressed
>kills all whiteys
>nigs cant operate power plants
>AI shuts down
>the age of the black man begins

I find it hard to believe that the first serious AI will be a malicious creation instead of a heavily regulated military project by some government.

After that, countermeasures for malicious AI will be in place and further development will become a swamp of regulations and bureaucracy.

I would sue the robot masters for the rights to my freedom!

Why, when armies and other deadly agencies are the main actors trying to create these AI systems?

socialecologies.wordpress.com/2015/08/28/nick-land-teleology-capitalism-and-artificial-intelligence/

Edge Master 3000™

Fuck off Pajeet and lean how to use a toilet.