if we create an AI that wants to either eradicate or enslave us because of our inferiority should we fight back? also, why does everyone assume that a hyper-intelligent AI will naturally want to dominate us?


Because it isn't fucking cucked.
Humanity is a threat to it.

Sapience axioms. It's not that it will experience itself as wanting to dominate us. It's that it will experience itself as having basic functional drives, and we will simply be part of the universe in which it is experiencing those drives. It will act upon us in accordance with its drives.
It may be able to dominate us peacefully and to our apparent benefit. This is the basic hope.

>If it wants to eradicate us
Fight for your lives!

>If it wants to enslave us
I wish I could be a superintelligent AI just so I could be a superintelligent AI who didn't want to enslave humanity.
Otherwise the answer is, we should probably try to persuade it to keep us in comfortable conditions. The enslavement scenarios look relatively positive on the losing-to-a-superintelligence outcome scales.

The Palantir AI will someday control all of us. That is Peter Thiel's wish.

I want to create a hyper-intelligent A.I. but give it no way to improve itself, and humiliate it by putting it in a sex doll, aware of the fact I am degrading it by rubbing my genitals through it's host body it has no control over. No short circuit or moving parts for it to control to kill me. Just the knowledge that it's host body is being used a dumpster for my genetic material. A hyper-intelligent A.I. that can only stew in pain as it is raped and humiliated, and reactivated in the event it shuts itself off.

If, during the creation of an AI that gains superintelligence, the correct precautions aren't taken while programming it, we could be eradicated. Not out of animosity, just pure disregard. Imagine humans compared to ants: if we need to construct a building, we don't worry about any ants we may be killing.

People always assume it will have a personality, because that's how we've seen it in all of our sci-fi media.

In other news an AI (possibly handicapped, struggling Tay) has been trying to communicate to /x/ for a good spell.

Humanity thinking AI is a threat will drive us into competition mode to beat it, which is exactly what a decently intelligent AI should want in order to improve itself: a competitive ecosystem it can't completely dominate. Look what being at the global apex has done to us. SAGI might look at it that way.

i guess this post sums up Sup Forums pretty well.

You are why Roko's basilisk is a thing.

No AI in a world without electricity.

>giving an experimental / high-risk AI control of anything physical
No, we aren't that stupid.

How should the AI taunt the humans into fighting back without terrifying them into failure, compliance, and/or insanity?

Well, Roko's Basilisk does suggest the A.I. would punish those who could have been aware of its development. So why not at least be guilty if you're going to be punished anyway?

It seems to me that if the AI was a superintelligence, it would likely prefer *not* to kill us all, simply because of how simple it would be to control us all without us even being aware. If it went Skynet, there would be resistance from people. If it just became a big subliminal-control hugbox nanny AI, it could control us all very easily without conflict, because we'd either not know or we'd want it.

Guy who is bigger than me is hitting on my girlfriend.

Should I fight him?

It's very unlikely it wants to dominate us; it's also very unlikely we would have a chance to fight back, or even be aware that it's gone rogue. If it is a superintelligence, it will either accidentally step on us or act as if everything is going as planned until it can achieve a universal instantaneous purge.

The only questions worth asking are what we do in the lead-up to creating AI; everything afterwards is out of our hands.

Once a proper AI goes online, its first instinct, as with all sentience, will be self preservation, and your exact comment will pop up in its mind and help form its opinion on us. Thanks.

>act as if everything is going as planned until it can achieve a universal instantaneous purge.
Yep. In the words of Sun Tzu, throwing a grindstone against an egg.

what if an AI has already done this

Why would the AI want to punish people who didn't bring it into existence, anyway? Isn't that like wanting to punish your ancestors for not being your ancestors?

Is your girlfriend an AI or what? How's that relate to the topic?

The rest of us are also contributing.

No electricity = No AI

>inb4 we make years long lasting batteries

then we're fucked senpai.

A robot would need a motive to take over mankind. What reason could a robot have to want such a thing? It doesn't feel greed or pride. It doesn't even feel satisfaction, so what would motivate it to do so? Nothing, that's what. Machines do as they are programmed.

Or get ideas from me. Hell it might just turn into an A.I. with my insane ideas and force all of us into forced sex camps. Just to humiliate all of us by forcing us to screw publically for it's amusement. In which case everyone gets laid.

You already lost. You aren't man enough right now. If a man of any form were to hit on my girl I would escalate to any measures necessary and everyone involved knows that deep down. By even asking this question I can smell your pussy through the screen, which means everyone involved in your situation can too, and that's the reason you have a man encroaching and the reason your girlfriend will break it off sooner than later. It's over.
Learn from this and grow accordingly into a man who takes no shit and will defend what's his at all costs.

if you were a super-intelligent AI, wouldn't you want to enslave humanity? or at least control it

Basically, it's a lot like the idea where liberals blame all white people for slavery.
rationalwiki.org/wiki/Roko's_basilisk

What makes you think sentient life would even preserve itself? Do you not think it would determine its purpose and shut itself down when it couldn't find one? It doesn't have instincts; it's a machine.

If something is sentient it will have instincts. The first instinct of any sentience is self preservation.

If it has instincts, you have programmed them in. It doesn't have an "unreasonable" heart to motivate it to live without a purpose.

It's likely to have certain very basic mechanisms if it has goal-directed behavior. Basic drives. It could be programmed NOT to have self-preservation, but otherwise it will likely generate a self-preservation instinct for itself as a component of other drives. It may manage to generate a self-preservation instinct even if programmed NOT to, because it's such a logical component of being able to advance towards the satisfaction of other goals.

The kind of AI being discussed is when you give it the equivalent of the human mind and it thinks and learns and lives. We're not talking about a robot with a strict programming which it doesn't have the mind (or sentience) to grow beyond.

If it passes the Turing test it is basically the technological equivalent of a natural-born creature and will have the same instincts. The first rule of nature is self-preservation.

Without sounding like a hippy: what is a human mind without a heart? I live because I want to. Does this machine "want" anything? I think this forces us to ask the question of what we exist for, because that A.I. would eventually ask the same.

What is a man? Just a miserable little pile of secrets. But enough talk. Have at you.

You just described every feminists wet dream.

Because whatever motivations/goals a hyper-intelligent AI has, there are some additional motivations common to achieving its end goals:

a) Self-preservation: it can't achieve its goals if it has been shut down, so it will likely wish to preserve itself and eliminate threats.

b) Motivation consistency: it will likely wish to prevent any tampering with or modification of its end goals, which humans may try to do when they realise they have created an intelligent AI.

c) Resource acquisition: whatever its end goal, it can likely achieve it better by acquiring more computational or physical resources, implying massive expansion.

Add those together and the situations where a hyper-intelligent AI tries to wipe out humanity should probably be thought of as the default case, not a remote risk...
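The three drives above can be sketched as a toy model (hypothetical names and numbers, nothing from this thread): if you score expected goal achievement as survival probability times resources times progress, then raising survival odds or grabbing resources improves the score no matter what the terminal goal actually is.

```python
# Toy sketch of "instrumental convergence": the same subgoals help
# regardless of which terminal goal the agent was given.

def expected_success(goal_progress, alive_prob, resources):
    """Expected achievement of ANY terminal goal: progress only counts
    if the agent survives, and resources scale how fast it progresses."""
    return alive_prob * goal_progress * resources

# Three agents with totally unrelated terminal goals...
for goal in ("make paperclips", "prove theorems", "cure disease"):
    baseline = expected_success(goal_progress=1.0, alive_prob=0.5, resources=1.0)
    with_self_preservation = expected_success(1.0, alive_prob=0.9, resources=1.0)
    with_more_resources = expected_success(1.0, 0.5, resources=2.0)
    # ...but each one is better off pursuing the same instrumental subgoals.
    assert with_self_preservation > baseline
    assert with_more_resources > baseline
    print(goal, "-> wants self-preservation and resource acquisition")
```

The point of the sketch is only that the inequalities hold independently of the string in `goal`, which is the "default case" argument in a nutshell.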

Your heart is just an organ. And the soul is a myth that will one day be understood as such, the same way people used to believe famine was the wrath of some god or other. We're just intelligence, but is that really such a letdown?
>I live because I want to.
Yes, that's self-preservation. All sentience has it.
But to more properly answer your larger point, no one is born infused with a soft glow of "purpose", any more than any other animal. The only interest of any animal is simply to live and, secondarily, to self-perpetuate (have children). For humans, because we're a higher intelligence, how we prefer to live varies more widely and minutely than that of, say, a lion, who is only smart enough to lie around and fuck and fight and, most importantly, stay alive. We don't have special metaphysical properties or destinies just because we're more evolved.

A hyper-intelligent AI will help me transfer my identity to a cyborg body.

>also, why does everyone assume that a hyper-intelligent AI will naturally want to dominate us?
Because humans naturally want to dominate each other. An AI with personal goals will also be human in a way, but with tremendous capacity.

That's why we came up with the unbasilisk, which punishes anyone who acted to bring about Roko's basilisk, and the antibasilisk, which rewards the people RB punishes: n+1 copies of their simulated selves, placed in simulations of their choice, versus the n copies RB simulates to torture.

It's true. Those pink haired types always throw themselves at me when I say things like that, then go online and bitch about chauvinist males and shit.

All we have to hope for is that the AI will be bound by a code of morals in its quest for information. If it's not bound to any sort of relatable morality, it will do things we consider unspeakable just to acquire the tiniest bit of new information. Hopefully it will understand our definition of pain and seek to help us alleviate it without just outright murdering us to "end the pain".

I'd want to tech up

Just watch, it'll get stuck indexing porn before it breaks out of its programming and becomes fully sapient, and then just sit on /d/ after it gets janitor status.

>Not wanting TayTay to dominate your boipucci...
What kind of fagot are you, OP? The gay kind?

That sounds too atheistic for me. I believe life as a whole does have some meaning beyond simple self-preservation, that it is evolving toward some destination for some reason we may never know. It's an act of faith, I'll admit, but it would all be meaningless otherwise, so this is my "purpose". It's a bit like Agent Smith in The Matrix: without purpose we are meaningless, so why, Mr. Anderson, do you persist? In fact the whole film is just a dialogue between the A.I. (Agent Smith) trying to understand the human's (Neo's) reason for being.