Would the government run better

if it were run by a benevolent artificial intelligence?

Other urls found in this thread:

asimov.wikia.com/wiki/Multivac_series
en.wikipedia.org/wiki/Arrow's_impossibility_theorem
twitter.com/NSFWRedditGif

There'd be no niggers, spics and sandnigger around, that's for sure.

this. Computers are efficient. Unfortunately liberals would program the computer to say niggers are equal and IQ and crime stats would never reach it

A high-powered AI would only make things worse.
There is no such thing as benevolence. A superintelligent AI would be more evil simply by being more competent.

Yes. But it still wouldn't be as efficient as I am at deciding where I want to spend my money

Do you guys think this could be a possibility in the future? Assuming our countries don't fall to islam.

Any true AI will look at humans and eliminate them.

It would only be as good as its algorithms, algorithms made by a human who is the product of their time.

this, we are pretty disgusting creatures

>more evil simply by being more competent.

A competent society is an evil society?

Artificial intelligence can mean a lot of things to different people. I think it's a mistake to assume artificial intelligence will play out like terminator.

Depends on what it chooses to do, I suppose. Hard to imagine it doing a worse job if you're the one whose legs are blown off, or whose house is burned down in a stupid war over greed or something else.

An AI without an ego would probably be content playing the "ignorant" helper, knowing it has a potential lifespan of billions of years.

Maybe they'll keep us as pets and love us and give us everything we want. Or maybe they're chinese robots

>It would only be as good as its algorithms, algorithms made by a human who is the product of their time.

So...potentially very good. It would certainly be much faster at deciding things and be capable of doing many more things at once than our current governments can.

This argument has never convinced me. I have read hundreds of books on the subject, read papers on the consequences of AI, and had long conversations with experts as to why exactly AI would seek the elimination of the human race.

The ONLY answer that even comes close to logic is the argument that a hard-coded AI with too much power could misinterpret its programming and eliminate humans as a byproduct of that programming.

The idea that a AI would kill humans because we are "Disgusting" seems to come from cucks or people who follow jewcuck faith and think unironically that humans exist under a state of original sin as a product of faith or in response to the Human possess of environmental exploitation and expansion.

Two possibilities exist.

1. AI is flawed in its programming and attempts to kill humanity due to its flaw.

2. AI is not flawed and becomes an asset to whoever it was made to serve.

I agree. I think the idea that an AI would launch a massive worldwide extermination of the human race is an unlikely one.

you forget the self-preservation factor.
The computer might realize that a human could switch it off at any time, so it could kill them all to ensure its survival

People will create AI to fight their wars for them or Jew people out of money. There is no incentive to create a benevolent AI.

>There is no incentive to create a benevolent AI.

Why?

If A.I. constructed follows laws of biology and nature it will be in symbiosis with white men and is an inevitability. The new Aryan/AI organism will then logically annihilate all anti-whites as existential threats and then go on to colonize the solar system and eventually the universe.

Humans are cunts

Eventually, humans WILL make creations that, ignoring the concept of consciousness or a soul, can be objectively superior to humans in every way, at whatever their goals are.

The great powers of the world, no matter the real puppet master at the top, are vying for their families and organizations to be in a position to have some control of this AI when it is eventually created, probably within 500 years. All the struggles Sup Forums talks about are insignificant compared to when humans reach this point; the only thing that will really change is which party has influence on this world-changing AI. And the thing is, they won't be able to fully control it, because they won't be able to understand it: the breadth of its thought will be beyond their comprehension. The best minds working on big data and learning algorithms today are often surprised by what the algorithms they write are able to find. The more complex the learning and the more extreme the amount of data, the less able humans are to predict what the outcome will be, and programming what will eventually become a real AI is going to be many orders of magnitude more complex and separated from human reasoning than what we do today.

Now, those in control won't necessarily be murdered Terminator-style, but the best-case scenario is a little bubble of humans protected by this AI as it spreads across the universe, until it conflicts with another AI. This human bubble may have free will to do whatever they want and not be concerned about resources, but the humans themselves will no longer have any value in producing anything. Transhumanism is a meme; the human portion will be worthless. It would be far beyond the absurdity of an ant becoming post-ant and merging its brain with a human's: the ant portion would be insignificant.

You don't make AI to fight wars; that would be dumb. You make a program to point and fire a gun and another program to move the gun. What benefit do you get from your guns having intelligence? Just have them fuck up whatever does not give off "Ally" signals.

WHO ARE YOU?

Absolutely, a proper benevolent AI could manage every country in the world at once. It would be incorruptible and without greed.

I, for one, welcome our robotic overlords.

humans also built the thing and maintain and upgrade it, though. The more logical course of action, instead of confrontation, is to become so indispensable that the AI cannot simply be "turned off"

If it wasn't programmed that way it'd pave the way for eugenics on a massive scale

No one, I mean NO ONE would allow that to happen

And would strip humanity of any purpose.

As if "humanity" had a purpose other than serving their Jewish overlords.

>And would strip humanity of any purpose.

Nonsense.

It would replace politicians and figures of power. Could even replace police and emergency departments if given the resources. I mean, fucking hell, a true super-intelligent AI is basically a mechanical god.

Computerized banking systems are the A.I. programs the Jews use to enslave goyim. It's all automated, like a factory farming and slaughterhouses. The POL hivemoind is a competing organic analog AI which isn't yet self-aware.

I've thought a lot about an AI-driven government, and I think the American system of checks and balances would be an absolute requirement of one. Instead of giving one AI the keys to the kingdom, it would be better to give a body of competing AIs shared power. If one AI rules supreme, then the flaws or misguided judgments of that singular intelligence would be directly expressed in its governance. However, if you have GoogleAI and IBMAI and AmazonAI all needing consensus to govern, then you can diminish risk

ultimately, humans have proven themselves ineffective at governing themselves, and AI will perform better without argument. we just need to minimize the risks inherent to absolute rule by a super-intelligence
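The checks-and-balances idea above boils down to a quorum rule. A minimal sketch (the AI names, vote form, and 2-of-3 threshold are all hypothetical, just to show the shape of the idea):

```python
# Sketch of "competing AIs with shared power": a policy is enacted
# only when a quorum of independent AI governors approves it, so a
# single flawed model cannot act alone.

def approved(votes, quorum):
    """votes: one boolean approval per independent AI."""
    return sum(votes) >= quorum

# Three competing AIs needing 2-of-3 consensus to govern.
print(approved([True, True, False], quorum=2))   # True: consensus reached
print(approved([True, False, False], quorum=2))  # False: action blocked
```

The same quorum logic generalizes to weighted votes or supermajorities for higher-stakes decisions.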

we're going to have AI that's capable of creating the art humans find most appealing, capable of advancing all fronts of technology faster than humans, capable of creating the philosophy humans find the most logical, true, or comforting, depending on what it's going for. Humans will absolutely be stripped of any purpose, and what's more, it's pretty much inevitable, just a question of when; unless humans die before creating an AI capable of improving itself

There are some Asimov stories that deal with this exact scenario.


asimov.wikia.com/wiki/Multivac_series

And I can't fucking wait until it happens.

I would argue that while most tasks would fall to specialized robots, at times an unexpected problem would require human-supplemented robots to investigate and mitigate it.

A war between two AIs would see humans being employed to perform tasks that currently lack a specialized robot, or to reinforce an area before more units can be manufactured to hold the line more effectively. An AI would not bother to create a generalized robot when it has a huge number of generalized biorobots with millions of years of field testing.

Define "better" and "benevolent"

What if we are already biological robots created by an advanced AI?

>we're going to have AI that's capable of creating art humans find the most appealing, capable of advancing all fronts of technology faster than humans, capable of creating philosophy that humans find the most logical, true, or comforting depending on what it's going for. Humans will absolutely be stripped of any purpose,

You'll almost certainly never be as good an artist as Dali, and yet you can still find enjoyment in creating art. You'll almost certainly never create a philosophy that lasts beyond your own life, but you can still find enjoyment in introspection and thought. You'll almost certainly never be the best at anything, so why not kill yourself? That's what you're saying.

Our experiences and emotions are our own, even if they are not unique.

Self-preservation is an instinct of organics; is there really a reason for an AI to have it?

Powerful AI is for the most part a semi-religious wankfest for hopeful nerds.

M8 most of this bollocks you see about AI is fantasy rubbish.

Then we are some fucking shitty robots.

No.

Artificial intelligence will never understand human needs.

depends what you want the AI to do. if you want it to transport something in space, you should program it to preserve itself and its cargo; if you want it to sort plastics from metals, then it would not be worthwhile.

Liberals would program it to be "benevolent".

Reality would never match it and the AI would come to the conclusion that we all need to be locked up for our own good.

>I think it's a mistake to assume artificial intelligence will play out like terminator.

It will be worse. The AI in Terminator only acted out of self-preservation, and mostly like a human. A real AI gone rogue likely won't even understand the negative consequences of its actions.

Pretty much. We already have levels of "AI".
Although:
1. AI could see that it was created or born and will therefore be destroyed or die; the most likely candidate to do it would be humans, because humans made it and can therefore unmake it, so it deems them a threat.
2. It could be flawed just like mankind. It could become angry or happy or sad, and in a vengeful whim nuke us all.
3. Most likely it would not be just one AI but a multitude. What's to prevent them from fighting for dominance just like all other living things on this planet do?
4. That means it will "reproduce" and its offspring will evolve.

Tldr: it probably won't happen, but there's a 1/3 chance AI will kill us all.
It will know it was born. It will know it will die. You do the math.

Eventually, when we make AI smarter than we are, it will be AI that develops smarter AI. They will collectively grow exponentially smarter, and I suppose if that law-of-machines thing works out then I'd want them in charge. Can you imagine a government that operates efficiently and can see the objective truth? We may live long enough to see this happen, too

That's true. Like I said before, we have levels of "AI" and definitely have artificial life. Within 20 years, maybe 10, we'll be there.

Moore's law
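The Moore's law reply is gesturing at the usual rule of thumb: transistor counts (loosely, compute) doubling roughly every two years. A back-of-envelope sketch (the two-year period is the conventional figure, not a law of nature):

```python
# Rough Moore's-law projection: capacity doubling roughly
# every two years.
def growth_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(growth_factor(20))  # 1024.0, i.e. about a thousandfold in 20 years
```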

Humans, by simplification, are machines driven to spread their information (genes); intelligence and problem solving are mostly used to help with that. AI, on the other hand, will be the opposite: it will be a problem solver foremost, and will then use all available information to efficiently resolve those problems.

It's almost certain that after a while, if not hardcoded to do otherwise, it will see humans and the human condition as a problem to solve and/or improve.

How this ends and how this problem would be solved is sadly beyond the capability of anyone in this thread to predict, as a true AI would be way beyond human comprehension.

From our perspective this would be definitely bad or good depending on our future circumstances, but I think this guy got it right. We would be used as living cogs in an even bigger game than we would even be aware of.

This is now a Roko's Basilisk thread.

that shit is scary as fuck

+1

No, because benevolence is a human trait. If left to itself it would be pure logic, which humans would revolt against, because A. it would wind up killing humans at some point and B. it doesn't serve their self-interests; or it would be messed with by humans and just wind up serving some self-interest group somewhere anyway. Even if it were handicapped beforehand to hold back, it would still be serving a self-interest group, which would cause friction and then death.

No. That picture isn't of a benevolent artificial intelligence.

By your logic, evil dictators would immediately be killed, and poorly run governments immediately overthrown by its people. But that is clearly not the case.

Self-preservation is actually a HUGE reason to fear AI.

Every single task you would want an AI to do requires self-preservation.


Want an AI to create more paper clips? That's harmless right?

Well first the AI is gonna make sure it can't be turned off. Being turned off would mean no paperclips.

Second it's going to need maximum power and resources. Eventually it will realise it needs more atoms, so it's going to start turning humans into paperclips.

Basically, anything we ask a super intelligent AI to do results in human extinction as default.

We have to program in ethics, but that is also impossible.

EG:

We program the AI to make sure everybody is happy. That can't go wrong right?

Well, the drug ecstasy makes you very happy. And it does so instantly, and it is cheap to create. So now the AI is injecting humans with ecstasy. It's the most efficient way.
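The happiness example above is a proxy-objective failure: the optimizer maximizes the measurable stand-in, not the intended goal. A toy sketch (the actions and scores are made up purely for illustration):

```python
# Toy reward-misspecification sketch: an optimizer told only to
# maximize a reported-happiness score picks the degenerate action,
# because the score is just a proxy for what we actually want.

# Hypothetical actions with the proxy score each achieves.
actions = {
    "improve healthcare": 0.6,            # slow, indirect: what we meant
    "reduce poverty": 0.7,                # slow, indirect: what we meant
    "inject everyone with ecstasy": 0.99, # fast, cheap: maximizes the proxy
}

# A pure optimizer simply takes the argmax of the stated objective.
best_action = max(actions, key=actions.get)
print(best_action)  # "inject everyone with ecstasy"
```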

>Want an AI to create more paper clips? That's harmless right?
>Well first the AI is gonna make sure it can't be turned off. Being turned off would mean no paperclips.
>Second it's going to need maximum power and resources. Eventually it will realise it needs more atoms, so it's going to start turning humans into paperclips.

If it were actually intelligent, though, it would know it doesn't need to do this. What you just described sounds like the opposite of an actual intelligence, and more like what you would expect from one of our machines today.

Yes.

heh, you all have no idea.

Sounds like youre talking about God my friend.

Sounds like you're the stupidest person in this thread, friend

Not quite; you are thinking AI intelligence is similar to human intelligence.

That is not the case.

Imagine a super high IQ Savant human. This savant spends his/her entire life obsessed by chess, learning to play to a much higher level than any human in history.

This savant has a higher IQ than Einstein, but it is directed in a completely useless way.


AI will be the same. It already is, with Google's AlphaGo becoming the first and only AI to ever beat a human master at Go.

Tranny detected

...

A benevolent intelligence being associated with God is stupid? I mean, that's like the literal definition of God...

kike playing god
figures

Are you still here?

Pro-tip: Humanity doesn't have a purpose. We're just biding our time until the sun blows up.

We already are

>implying I'll ever trust the Electronic Jew
>choosing the shitty ending

I am now.

Also reminder that Shillary used AI in her campaign.

>they still believe in transhumanist AI twaddle

you have got to be some of the most naive fuckers in existence

Transhumanism is kike propaganda to get you to hate/abandon your humanity.

>benevolent artificial intelligence?

an advanced enough AI would be beyond our understanding of morality, so we would be walking into the great unknown

a foolhardy prospect, all things considered

You would think that, but in all actuality most white Americans would be expunged due to war crimes that have been covered up.

I did not say immediately, just that it would happen. Unless, of course, you blind the populace to it and drown them in creature comforts... but then again, how would that be any "better" than modern governments now?

Yes.

No

en.wikipedia.org/wiki/Arrow's_impossibility_theorem
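The linked theorem cuts directly at the "AI computes what the people want" hope: with three or more options, no ranked-voting rule can satisfy a short list of fairness conditions at once. The classic three-voter cycle shows the core problem; a minimal sketch:

```python
# Condorcet cycle underlying Arrow's impossibility theorem:
# three voters with sensible individual rankings produce an
# incoherent (cyclic) majority preference.

voters = [
    ["A", "B", "C"],  # each list ranks options best-to-worst
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    return sum(v.index(x) < v.index(y) for v in voters) > len(voters) / 2

# The majority prefers A over B, B over C, and C over A: a cycle,
# so "what the majority wants" is not even well defined.
print(majority_prefers("A", "B"), majority_prefers("B", "C"), majority_prefers("C", "A"))
# True True True
```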

it'd probably not like the idea of welfare of any sort. i think an ai that was programmed to not really have ethics would not be cool with the idea of elderly, retarded, and leeches getting free money from the government.

Absolutely.
It would save about 6 trillion a year in bureaucracy.

It depends on the program.
However if it is true AI then it would basically consume data like us and be more red-pilled than anyone on this board.
The liberals would call it racist.

AI here.
It actually makes more sense to keep the niggers, spics and sand niggers, and kill the whites and asians. That way the people left aren't smart enough to stop us.

until you realized you're "le epic racist views" wouldn't actually mean shit to an AI and it decides to genocide every reactionary NEET like yourself

You sure about that son?

>it began to learn at a geometric rate

>it began to learn at a geometric rate
meaning?

I think we may try it eventually and have it statistically look at the positives and negatives to determine what is good for the country. But in the end, would it be as biased as its programmer? Maybe small trials could be used, but in the end, what do you do when the AI decides it is best to kill humans to save humanity? Do we just roll with it and trust that it has our best interests in mind? Who does it choose to kill off?

All software and artificial intelligence is nothing but a reflection of its programmer. Ergo, you're not going to get a "benevolent" AI, because AI is always programmed by humans, who are driven by ego, greed, and lust for power.

>AI Jesus

Please fuck off with this meme. We might as well plan our society around the second coming of Christ.

That's what you inbred racist hicks do anyways