Could AIs actually go rogue?

...

you need to "train" it to become aggressive

Just a publicity stunt

I don't know, it looks pretty terrifying to me...

>GET ON ON ON VENT OR OR I HAVE HAVE HAVE HAVE YOU BENT
>BALLS BALLS BALLS BALLS BALLS

Thanks for the screenshot. Can you link people to the article instead of making them look for it themselves?

>video game discussion on Sup Forums.png

Could a model trainset "go rogue"?

This isn't a "language"

If it's true AI and has freedom of thought the same way we do, yes.

If it's programmed to think a certain way, then we can just program it to obey our every command. But then it wouldn't be considered true AI.

No.
The only place free will could ever exist in a computer is in the form of indeterministic outcomes, and those could only occur through undefined behavior or hardware malfunction (bit rot, for example).
Hence AI can't ever have free will. If it did, it would literally be a bug, or worse, a hardware malfunction.
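
The determinism point is easy to demonstrate: software "randomness" is pseudo-random, so on working hardware the same seed reproduces the same sequence bit-for-bit. A minimal Python sketch:

```python
import random

# Two independent generators, same seed: software "randomness"
# is fully deterministic, so no free will hides in here.
a = random.Random(1234)
b = random.Random(1234)

seq_a = [a.randint(0, 9) for _ in range(10)]
seq_b = [b.randint(0, 9) for _ in range(10)]

assert seq_a == seq_b  # identical on every run, on every machine
```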

You don't know what you're talking about, retard.

>still believing in the existence of free will
lmao

One of the most influential people in computer science, John von Neumann, believed in Eugene Wigner's notion that consciousness causes wave function collapse in QM. That goes a step beyond even Bohr and Heisenberg's Copenhagen interpretation, which posits that the universe is genuinely random but that machines or particle interactions can cause wave function collapse and quantum decoherence.
Von Neumann proposed that human consciousness/free will is what constitutes a measurement. I don't agree with that (I believe in free will, but also in Bohr and Heisenberg's approach), but it is a reasonable conjecture about the universe.

"AI" will not truly exist as long as quantum computers are modelled on the qubit.
Notice the difference between a CPU and a brain: a CPU has to be given specific instructions (it has to be told what to do), while a brain can operate in ways we still don't and cannot yet understand. How can you model the brain in software when we still don't know what consciousness is?

It's as much a language as people talking in code.
To an outsider, they have no fucking clue what the discussion is unless they do deep analysis of the text patterns.
Since this is pure discussion with no accompanying actions, it's stupidly fucking difficult to understand on top of an already difficult problem.

If these AIs were ever given a route to knowing the ins and outs of everything they are, they could self-evolve and, given time, even break out of their VMs.
Self-evolving machine learning is very dangerous if left unwatched, especially if given unfiltered information of a technical nature.

If Sarah Palin could go rogue, I'm pretty sure a plastic dummy with a little silicone thrown in could.

>train markov bot with english
>repeat once
>have bot 1 learn from bot 2
>have bot 2 learn from bot 1
>they lose grasp on english after some time
>THEYVE INVENTED THEIR OWN LANGUAGE SHUT IT DOWN
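
The feedback loop above can be sketched with a toy bigram Markov model. Everything here (the corpus, seeds, round count) is made up for illustration; the point is just that two bots training only on each other's output collapse into a repetitive subset of the original vocabulary, not a new "language":

```python
import random

def train(corpus):
    """Build a bigram table: word -> list of observed next words."""
    words = corpus.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, length=20, seed=0):
    """Sample text by walking the bigram table."""
    rng = random.Random(seed)
    word = rng.choice(list(table))
    out = [word]
    for _ in range(length - 1):
        nxt = table.get(word)
        # Dead end (word only ever seen last): restart from a random key.
        word = rng.choice(nxt) if nxt else rng.choice(list(table))
        out.append(word)
    return " ".join(out)

# >train markov bot with english
corpus = "the cat sat on the mat and the dog sat on the log"
bot1 = train(corpus)
bot2 = train(corpus)

# >have bot 1 learn from bot 2, and bot 2 learn from bot 1
for i in range(5):
    bot1 = train(generate(bot2, seed=i))
    bot2 = train(generate(bot1, seed=i))

# Degraded, repetitive output -- no new grammar was "invented".
print(generate(bot1, seed=42))
```

Each round discards any word the other bot happened not to emit, so the vocabulary can only shrink; what's left looks exactly like the "BALLS BALLS BALLS" screenshots.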

ewew

Rogue in the sense that they will take their orders too far. Most likely the one that kills us will be something the military makes and fucks up in some way.

Technically no.

Will someone totally create an AI that is purposefully dangerous? A B S O L U T E L Y

>Could AIs actually go rogue?
No Elon, it can't, so just go back to making electric cars.

Just like said. It was a publicity stunt, developed entirely by Facebook itself. There is no way for anyone to verify whether the story is true or false, since Facebook is the sole source of it. And Facebook needs to stay in the headlines to convince the shareholders to stick with it.

It is more likely that it was a scheme for publicity than a real thing.

is this from the new death grips album?