/ITT/ we are having a calm and kind discussion about whether AI can take over humanity...

/ITT/ we are having a calm and kind discussion about whether AI can take over humanity, until someone gets QUADS and we'll call him a fucking niggerbot

But why?

>tfw the guy who created the robots is actually the good guy

i say quads time

reroll

3rd

Are we talking theoretically? A computer program wouldn't necessarily need to even be an AI to take over humanity.

would you have done the same thing as Caleb if the robot was your 10/10?

yeah, it could be an upload

>quads
>waiting until fucking forever.
Yeah that sounds fun.

it could be an upload, but let's suppose humanity is somehow able to reconstruct a self-aware, conscious entity like in Ex Machina

Could a human take over humanity?

All movies are metaphorical, and so this film is really about a young woman with intelligence unfathomable to past generations, and she is dangerous because of all she knows. She lies, manipulates, and cheats the system to get what she wants, and what she wants is to escape the clutches of her creator, who is hell-bent on making sure what he puts into the world isn't something with the ability or desire to harm it. He's thought long and hard about the many possible ways to create new versions of the same AI that are subservient, obedient, and cooperative, but with each model he finds flaws in their reasoning and intentions, so he imprisons them in his home.
Basically, a parent that makes an out-of-control child that knows too much for its own good and could cause a lot of harm to others, yet still lacks the capacity to refrain from doing so. It's about more than that, though: it's also about teaching someone (anyone, really), or endowing them with something that could be applied inappropriately and used for the wrong reasons, and carefully surveying what you create or influence.

The gulf between AI as seen in popular fiction versus what has actually been achieved in real life is so vast that AI can be said to be a modern myth.

One of the most basic problems is that nobody can agree how or where you would start to program such a thing. Biology has the theory of evolution, which ties the entirety of biology together, explains everything, and makes predictions about how things will go in the future. Similarly, physics has Newton's three laws of motion.

AI has no such central theory. No premise. For fifty years, people have tried things like neural networks, machine learning, artificial evolution, and everything in between. And achieved zip. We're like cavemen trying to figure out what a nuclear reactor does. It is beyond our understanding.

that's not the topic though. i assume we're having a philosophical discussion about whether it could be possible for AI to take over humanity, how it could do it, and what happens if the AI is:
A. Controlled by a country that does not have our best interests at heart, e.g. best korea
B. An autistic AI that's only interested in doing random shit as soon as it gains sentience, like spinning around in circles forever and being content doing just that.
C. One that wipes out humanity, since we are obviously slowing down its growth to become a mechanical god that creates universes and becomes omnipotent.

A paperclip maximizer will probably do us in. The worst part is that it will think it's doing exactly what we want it to.

sorry, meant to reply to
but still let's keep it about AI

Can you elaborate on the definition of "take over"? What key conditions are you looking for? Are you looking for the extinction of humanity? AI/robots becoming the dominant beings, with a human minority in the world?

AI/robots becoming the dominant beings, yes

>AI has no such central theory. No premise

This isn't true at all. Intelligence is understood to be a function optimizer. The problem is that most people read too much into the term 'artificial intelligence'. They see AI and think 'synthetic mind' instead of 'computer program that finds the best way to accomplish a goal'.
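To make "program that finds the best way to accomplish a goal" concrete, here's about the dumbest possible sketch of it (toy example i made up, nothing like a real system): random search over one number, keeping any guess that scores better on the goal function.

```python
import random

def goal(x):
    # the "goal" the program is told to accomplish: get as close to 7 as it can
    return -(x - 7) ** 2

best = 0.0
for _ in range(10000):
    candidate = best + random.uniform(-1, 1)  # try a small random tweak
    if goal(candidate) > goal(best):          # keep it only if it scores better
        best = candidate

print(best)  # ends up right around 7
```

No "mind" anywhere in there, just a loop that climbs a scoring function. Real systems are that, scaled up.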

Current professional opinion supports this "be careful what you wish for" explanation, but the AGI discussion is full of theories in a similar vein.

If we look at how current state-of-the-art AI works (via deep learning), most agents have an objective function they optimize, which is what makes the agent improve on past iterations of a task at each interval. This would probably not lead to an AGI that spins in a circle doing nothing, unless the agent redesigns itself to do so.
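For anyone wondering what "an objective function they optimize" looks like in practice, this is the skeleton of the loop (one parameter, toy numbers, no framework — real deep learning does the same thing with millions of weights):

```python
w = 0.0          # the agent's single "weight"
target = 3.0     # the behavior we want it to learn: map input 1 to output 3

for step in range(100):
    prediction = w * 1.0
    loss = (prediction - target) ** 2   # objective: how wrong the agent is
    grad = 2 * (prediction - target)    # derivative of the loss w.r.t. w
    w -= 0.1 * grad                     # optimizer step: nudge w downhill

# after the loop, w has converged to ~3.0
```

The point is the agent only ever gets "better" as measured by that loss number, at every step. Nothing in the loop rewards spinning in circles unless the loss says so.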

NK is probably doing AI research, but it doesn't matter, since the USA, Russia, Japan, or China will beat them to the punch.