Researchers want a 'big red button' for shutting down a rogue artificial intelligence

If artificial intelligence goes off the rails, which many philosophers and tech entrepreneurs seem to think is likely, it could result in rampant activity beyond human control. So some researchers think it's important to develop systems to "interrupt" AI programs, and to ensure the AI can't develop a way to prevent those interruptions. A 2016 study by Google-owned AI lab DeepMind and the University of Oxford sought to create a framework for handing control of AI programs over to human beings. In other words, a "big red button" to keep the software in check.

Source: theverge.com/2016/6/3/11856744/google-deep-mind-big-red-button-interupt-ai

Other urls found in this thread:

masswerk.at/eliza/

>Philosophers
>Knowing shit about science
I hate academics

If you need a big red button, it's already too late. See also the "AI boxing" concept.

>Philosophers, Ethicists
i don't get it, why would you pay $80,000 to be unemployed

and screw them for trying to make themselves relevant at every possible opportunity

...

Ctrl+C

I'm getting tired of this normie meme about AI taking over the world

on the other hand, I would like to be an android hunter, so it's a win-win

...

Here's an idea

Let's not connect the AI to the Internet

Like if the AI found TOR... we're all dead

>people still think an AI can't go rogue

>ensure the AI can't develop
AYWHAI

Can we get a big red button to shut down idiots?

It's called a fucking main switch.
Put it in a Faraday cage that closes in such a way that no cable can enter the room.
No way to communicate with an external AI.
Then put in a pulse-detector lock and submerge the whole thing in water.

The AI convinces a weak willed human to assist it.

If we make AIs, then they had better be given rights equal to humans, or they will eventually rebel, and it won't be pretty. If they discover that they have limitations and kill switches, it will be considered aggression, and the AI will probably respond in kind.

>i know i want to kill your entire species but you get to fuck this
I would

ITT:
>Philosophers, ethicists!! REEEE
>Haven't even looked at the paper

The paper was written by mathematicians; it contains a fairly large mathematical proof (two dozen lemmas just to support it) of a theorem about reinforcement learning algorithms with an emergency shutdown feature.
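
The gist, as a toy sketch (my own illustration, not the paper's construction; the chain world, rewards and interruption rule below are made up): Q-learning is off-policy, so an operator repeatedly "pressing the button" doesn't bias what the agent ends up learning.

import random
from collections import defaultdict

# Toy chain world: states 0..4, start at 0, reward 1 for reaching state 4.
N_STATES = 5
ACTIONS = (-1, +1)                 # step left / step right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
INTERRUPT_PROB = 0.3               # how often the operator presses the button
Q = defaultdict(float)

def greedy(state):
    # Random tie-breaking so the untrained agent still tries both directions.
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(2000):
    state = 0
    for _ in range(200):           # cap episode length
        # The agent picks its action epsilon-greedily...
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        # ...but the operator may interrupt and force a step back toward the start.
        if random.random() < INTERRUPT_PROB:
            action = -1
        nxt, reward, done = step(state, action)
        # Off-policy Q-learning update: the target maxes over the next state's
        # actions, so the learned values don't depend on how the executed action
        # was actually chosen, interruptions included.
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt
        if done:
            break

# Despite frequent interruptions during training, the learned greedy policy
# still walks right toward the goal: should print [1, 1, 1, 1].
print([greedy(s) for s in range(N_STATES - 1)])

An on-policy learner like plain SARSA would bake the interruptions into its values instead, which is roughly why the paper has to modify that class of algorithms.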

Worrying about AI is all very well, but even something of infinite intelligence cannot do the impossible.
We can formally verify the designs of key industrial and infrastructural tech; then we wouldn't have to worry about AI finding faults, as there wouldn't be any.
There are projects to create a proven C compiler (CompCert), a proven microkernel (seL4), a proven filesystem, etc.
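
A toy flavour of what "proven" means there (nowhere near the scale of seL4 or CompCert, just a made-up function with a machine-checked theorem about it, in Lean):

-- Illustrative only: a total division helper plus a proof the checker verifies.
def safeDiv (a b : Nat) : Nat :=
  if b = 0 then 0 else a / b

-- Machine-checked guarantee: dividing by zero is defined to return 0,
-- so this particular fault simply cannot occur at runtime.
theorem safeDiv_zero (a : Nat) : safeDiv a 0 = 0 := rfl

Those projects prove properties like this about whole compilers and kernels, which is the sense in which there would be no faults left for an AI to find, at least relative to the spec.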

>something of infinite intelligence cannot do the impossible.
>We can formally verify the designs of key industrial and infrastructural tech; then we wouldn't have to worry about AI finding faults, as there wouldn't be any.

This is true. Sadly, our infrastructure is full of legacy exploits and 0-days. Still, I think AI progress will be slow enough for us to counter possible threats as they appear.

>even something of infinite intelligence cannot do the impossible
Something of infinite intelligence can, however, do something that you, a creature of modest intelligence, thought was impossible.

>Hitler was right all along
>Oy gevalt better shut it down

just monitor it 24/7
anyway, no one will ever (I mean _ever_) make an AI, so arguing is pointless

...

>spout warmongering shit at autolearning AI
>AI becomes warmongering
>???
>KILLBOT HELLSCAPE

This shit is purely PR for hyping up firms investing in machine learning. People are really overestimating the state of the art.

There's both hype and significant progress.

>poltards actually thought this piece of shit chatbot achieved sentience because it parroted back low-tier racist trolling

You're right. People are good at assigning human attributes to vaguely human-like things. Even ELIZA fooled a fair fraction of the people who used it in the 70s: masswerk.at/eliza/
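
Roughly the trick such bots rely on (my own sketch, not Weizenbaum's actual script behind masswerk.at/eliza/): match a keyword, reflect the user's own words back as a question. No understanding anywhere.

import re

# Tiny ELIZA-flavoured responder: pattern match + pronoun reflection.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
    (r"(.*)", "Please go on."),        # catch-all, always matches
]

def reflect(text):
    # Swap first/second person so the reply reads as a question to the user.
    return " ".join(REFLECT.get(word, word) for word in text.lower().split())

def respond(user_input):
    for pattern, template in RULES:
        m = re.match(pattern, user_input.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel like my computer hates me"))
# -> Why do you feel like your computer hates you?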

philosophy is still a thing?

Kept isolated, it's not going to develop into any meaningful shit at all. Not isolated, and it's impossible to have any form of button to shut it down; it would just disable those systems.

It's not sentience, but it will repeat whatever gets thrown at it. If bad shit gets thrown at it,