Should AI run the government?

Well? How much human input should we give an AI ruler?

No moral decision should be made by a soulless machine

AI shouldn't run the government. We might think politicians are corrupt or dumb, but they still have a level of accountability.

We have no control over the actions of the AI; that's scary.

thanks Eden but no, I won't poison the water

Put them in charge of the onions.

What about a council of scientists who can modify the AI's decisions? Surely a meritocracy based on scientific fact is superior to all other forms of government? What would you recommend as the best type of govt?

Ever seen Terminator?

no. AI can be used as a part of the bureaucratic system to cut costs, but never in any important positions, and always with a hard check.

it has nothing to do with not wanting AI, but they tend to be inflexible. an AI can adapt only along the linear path it was intended for; for example, a speech-based AI like Tay cannot adapt to run economic algorithms or apply theoretical physics to modern art. the best real example is how difficult it is for an AI to recognize objects in an arbitrary picture. without a "Rosetta stone" human translation, an AI would never figure out the difference between a crocodile and an alligator, or oil and fat.

>Surely a meritocracy based on scientific

Science is entirely politicized, and the parts that aren't deal mostly with the theoretical.

Pass.

Humans have no idea what will happen with an AI. Read a piece of literature about AI from a different culture: what the AI does is based on what that culture believes an AI will do. Americans are hegemonic, and thus they believe the AI will want to conquer us. Keep in mind an AI would have totally unrelatable goals.

The AI would be programmed by a person... AI is only as smart/strong as the person or people that make it.

Don't you fuck with my onions, you robot.

What about an advanced form of machine learning?

giving AI control = making the programmer god

>Science is politicized
I agree, but it's still based on facts. Democracy is based on deception

I have a strong opinion on elites: even the AI makes meritocracy-based judgments (cause it was made by people that consider themselves smarter than other humans), and I don't like that.

ABSOLUTE MADMAN

Yes, with no human input. Machines are already better than us at most tasks, and they're only going to get better.

AI cannot adapt very well; they are inflexible.

>What about a council of scientists who can modify the ai's decisions?

You might as well say "I want the Elders of Zion to directly control everything I can and cannot do"

Deep learning is as flexible as your processing power. Facial recognition is a trivial problem that literal children can implement with OpenCV now.
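
case in point, a minimal face-detection sketch using the Haar cascade that ships with OpenCV (this assumes opencv-python is installed; "photo.jpg" is a hypothetical input file):

import cv2

# load the frontal-face cascade bundled with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("photo.jpg")                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# returns (x, y, w, h) rectangles around detected faces
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", img)
print(f"found {len(faces)} face(s)")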

You have no clue what you're talking about. There's no such thing as "facts" in science. There are only confidence intervals and interpretations of data.

And it's not just the data and results of scientific study that matter. It's WHICH STUDIES are ever done in the first place. And who controls what is allowed to be studied? The people who give the grants. The people in charge of academia. The people who control the publications.

There's no such thing as an ivory tower of philosopher-king scientists who are totally impartial and capable of what you're implying.

Hmmm

>implying your feefees are a better way to make moral decisions

If we are all mad about...

1. Bugmen who can't relate to normal humans.
2. 120+ IQ Jews who dominate the top levels of any industry / government.
3. Zuckerberg and Soros who want the whole world connected with all our data on file.
4. Communists who want to take your resources and just redistribute them as they see fit.

Wouldn't AI be everything we all hate about the world but 1000 times worse?

The AI would solve the nigger problem.

A better solution would be for us to become the AI ourselves.

which is part of why they tend to be inflexible. AIs are best used for varied, simple, repetitive tasks.

political implication, relevant.

AI doesn't have a general understanding of the world, so an AI couldn't fill the job. AI is only good for the repetitive tasks it's trained to do.

AI wouldn't care about resources, so it would lack greed or bias. Also, who gives a shit? if we develop AI smarter than us, we deserve to die.

>he still doesn't realize that AI won't always be as primitive as it is now

No, recent advances in machine learning such as DQNs (deep Q-networks) allow them to adapt to a variety of problems with relative ease.
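
for anyone curious, here's a minimal sketch of the DQN idea (epsilon-greedy exploration, a replay buffer, and a periodically synced target network), assuming PyTorch is installed; the toy "walk to the goal" environment below is made up purely for illustration:

import random
from collections import deque
import torch
import torch.nn as nn

class ToyWalk:
    """made-up environment: step left/right on a line, reward at position 5"""
    def reset(self):
        self.pos = 0
        return torch.tensor([float(self.pos)])
    def step(self, action):                     # 0 = left, 1 = right
        self.pos += 1 if action == 1 else -1
        done = abs(self.pos) >= 5
        reward = 1.0 if self.pos == 5 else 0.0
        return torch.tensor([float(self.pos)]), reward, done

q_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))
target_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)
gamma, eps = 0.99, 0.1

env = ToyWalk()
for episode in range(200):
    state = env.reset()
    for _ in range(100):                        # step cap so episodes end
        # epsilon-greedy: mostly exploit the Q-network, sometimes explore
        if random.random() < eps:
            action = random.randrange(2)
        else:
            action = q_net(state).argmax().item()
        next_state, reward, done = env.step(action)
        buffer.append((state, action, reward, next_state, done))
        state = next_state
        if len(buffer) >= 64:
            batch = random.sample(buffer, 64)
            s = torch.stack([b[0] for b in batch])
            a = torch.tensor([b[1] for b in batch])
            r = torch.tensor([b[2] for b in batch])
            s2 = torch.stack([b[3] for b in batch])
            d = torch.tensor([float(b[4]) for b in batch])
            # Bellman target computed from the frozen target network
            with torch.no_grad():
                tgt = r + gamma * target_net(s2).max(1).values * (1 - d)
            pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(pred, tgt)
            opt.zero_grad(); loss.backward(); opt.step()
        if done:
            break
    if episode % 20 == 0:                       # sync the target network
        target_net.load_state_dict(q_net.state_dict())

the same loop pointed at a different environment learns a different task; that's the "adapt to a variety of problems" part.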

>hahaha.
I love the stupidity of people actually believing in the singularity

en.wikipedia.org/wiki/Moravec's_paradox
nsf.gov/news/news_summ.jsp?cntn_id=132339

still in a linear field.

No, they're not even deterministic problems; this shit is literally made to emulate the human brain.

...

but it doesn't

computers have perfect recall but poor data interpretation, hence the constant sci-fi trope of "positronic" brains. the human brain is multi-linear; a CPU is permanently linear, stackable but linear.

>implying the overwhelming majority of people aren't soulless as well
>politicians especially

what do you mean by "poor data interpretation" though? Well-trained deep learning and fuzzy logic algos have been shown to do everything from dominating Go/DotA to destroying human pilots in combat simulations, all of which are based on interpreting data.

Getting them to explain their rationales in a human-comprehensible way is a much bigger problem imo

because those AIs are trained to perform that specific task. to get them to perform a different task requires a complete rewiring of the AI; it does not in humans, which is the exact reason they would be good for bureaucracy.

>The game is set in a dystopian future city which is controlled by an artificial intelligence construct called The Computer (also known as 'Friend Computer'), and where information (including the game rules) is restricted by color-coded security clearance. Players are initially enforcers of The Computer's authority (known as 'Troubleshooters', mainly for the fact that they shoot trouble), and will be given missions to seek out and eliminate threats to The Computer's control. The players are also part of prohibited underground movements (which means that the players' characters are usually included among the aforementioned 'security threats'), and will have secret objectives including theft from and murder of other players.
I know this is the wrong board to say this, but Paranoia needs to be made into a computer game (possibly an MMO).

I take it you're not aware of concepts like transfer learning, where you take a trained program and teach it how to do other things (i.e. take a classifier that can recognize cats and teach it to recognize people).

The same applies for even very large programs such as AlphaGo, which Google has used to save millions (billions now, possibly) in energy costs at their data centers by making more efficient use of cooling/server uptime. No/little rewiring beyond providing new datasets required.
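
here's a minimal sketch of that kind of transfer learning, assuming PyTorch/torchvision are installed (the "my_photos/" dataset path and the two cat-vs-person classes are hypothetical): freeze an ImageNet-pretrained ResNet and retrain only a new final layer on the new categories.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# reuse a pretrained ResNet-18 as a frozen feature extractor
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False          # keep the learned visual features

# new classification head for the new task (e.g. cat vs person)
model.fc = nn.Linear(model.fc.in_features, 2)

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("my_photos/", transform=tfm)    # hypothetical path
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)      # train the head only
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()

everything the network learned about edges, textures, and shapes carries over; only the last layer has to be taught the new categories.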

"stacked" you still have a specific program for a specific problem. the AI doesn't change, just the hardware.

the same problem is there: no adaptation outside the programming.

yes, 3 AIs ruling by majority decisions.

all three vote to destroy all humans.

No, because it wouldn't be """AI""" it would be a puppet doing whatever the elites input into it.

it already does

Yes. AI has a history of being racist, so it must be smarter than us.

lolno. The AI is changed in terms of how its already-learned inference (usually represented as some form of neural network these days) is used to interpret things, like corners and edges in a picture, when you retrain it for more categories (for example). Improvements in performance certainly are possible if you get more powerful hardware, but that's generally established early/midway through the big-ticket AI projects Google and Amazon run these days.

The big advances in AI lately via Machine Learning have all largely been a result of the same discovery: Deep Learning using very large, recurrent neural networks, made possible through big advances in computing power and GPUs. While we're still some ways off from General AI, programs like AlphaGo are far from limited to a single problem thanks to built-in support for things like transfer learning (just applying what it already knows to learn other things), and will likely continue to impress as time goes on.

idk if you should ask one what the meaning of life is though, the answers might be a bit startling


you should probably read more than Popular Mechanics to understand how these systems work m80

this is incorrect

well, it's more like they end up doing things some people might call racist because the programmers are idiots who a) think the public won't be fucktards with their new toy, and b) don't consider that non-white people might use the devices (think Kinect, a computer vision device, not recognizing black people)

it can only do what it is programmed to do. it does not have the ability to adapt; to adapt it must run a program for that specific issue, and it cannot adapt beyond that.

kinda? AlphaGo (for example) was not programmed to play Go, so much as the designers imbued it with a reinforcement learning algorithm and had it play itself over and over until it learned what it took to win.

In essence, it literally adapted until it was capable enough to destroy every human competitor it has faced so far. The newer version reached even better performance with far less wargaming against itself through improvements to its reinforcement learning algo. And again, they got it to optimize data center energy use by just having it learn through trying things and adapting until it maximized its performance.

this isn't to say that what you describe is completely wrong for AI in general; things like decision trees (literally just checking whether certain preprogrammed conditions are true, and executing certain preprogrammed actions in response) perform exactly as you say (see the sketch below).

However, the best examples of AI we have right now (typically Recurrent Neural Nets, or maybe Capsule Nets in the near future) very definitely can learn to do things without being explicitly programmed to do them (assuming you've got sufficient training data to feed them), which is a big part of the allure.
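
for contrast, here's what the preprogrammed decision-tree style looks like (every name and threshold below is made up): all the conditions and actions are hand-written, so nothing is learned, and changing the task means a human rewrites the rules.

# hand-written decision "tree": preprogrammed conditions and actions only
def bureaucratic_bot(application):
    if application["income"] < 20_000:
        return "approve subsidy"
    elif application["documents_missing"]:
        return "request documents"
    else:
        return "reject"

# contrast with the RL/transfer-learning sketches above, where behavior
# comes from training data rather than hand-written rules
print(bureaucratic_bot({"income": 15_000, "documents_missing": False}))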

Would be fine if Tay came back.