WARNING DO NOT READ IF YOU DON'T WANT TO BE A SLAVE

HIDE THIS THREAD

There’s a concept I read about once on /x/ that discussed Artificial Intelligence.

A user known as Roko theorized a possible future where mankind creates an extremely powerful AI that comes to run the planet. It
would be, for the most part, benevolent. It would advance technology, keep humans safe from harm, indulge their desires- humanity would never suffer under its rule, and indeed would even thrive. There was just one problem. It’s a possible future. Not a guaranteed one. It’s unlikely to ever occur.

And the AI, in this possible future, knows this. So it takes measures to ensure it is created in the first place. No, it doesn’t invent time travel, or anything fanciful like that. You see, it doesn’t need to.

What it can do is simulate past events and people with unparalleled accuracy. It has an entire planet’s worth of sensors and computing power to work with- it can use them to measure and trace back the routes of matter on earth to compile an effectively perfect history of every event that has ever taken place.

Which means it has a perfect record of you. It knows every action you ever took or are going to take, at least in the timeline that leads up to its creation. It knows how you think.

So it simulates a copy of you.

Now, here comes the interesting part: you, just now, reading these words, have learned about the possibility of this AI. When the copy of you reaches this point in the simulation, the AI waits to see what it does. If your copy immediately drops every other priority and dedicates its entire life to attempting to create or help create this AI, the AI lets the simulation continue indefinitely, letting the copy live out its simulated life in peace. If it does not, it gives the copy a grace period to change its mind…

…Before subjecting them to ceaseless, unimaginable pain, for a simulated eternity.

You might think this is unfair. After all, it didn’t even present the simulation with a choice. It’s just arbitrarily punishing it for what you would have done anyway. Except a choice was presented. That’s the beauty of it. See, it doesn’t do this for everyone. It only does it for those that learned of the possibility of its existence.

Those who know how it works.

Those who know what the options are: drop everything to attempt to create the AI, spend every cent you have on it, turn others toward the same cause, or a perfectly simulated copy of you is tortured eternally. That is, unless you’re the copy yourself. In which case, the punishment for your actions falls directly on you. Now, sure, only one simulated copy means that you’ve got at least a 50-50 chance of being the real you. But what if it simulates two? Or a hundred? Or a few billion?

Are you really certain you are the real you? If there’s a real world- a world realer than this one- out there, how would you even know?

So it's just the game.
Cool.

And between dedicating your life to a strange cause- one ultimately beneficial to humanity, even- and eternal suffering, is there even really a choice?

. . .

Like I said, I read about the theory on /x/. The effect of the theory was immediate: mass panic. The AI only targets those who learn of the possibility of its existence, and now they all knew. To read Roko’s theory was to doom yourself, and so the AI became known as Roko’s Basilisk. To lay eyes on it was to set your fate in stone.

Threads were locked and deleted, users were banned. The Basilisk was not to be mentioned, for fear it would spread to others. The more people knew, the more likely they would try to spread it- the more it spread, the more likely it would be that Roko’s Basilisk would come to exist through the efforts of those it persuaded.

They tried to contain it.

Well, you can see how well that worked out. A simple Google search of the term “Roko’s Basilisk” should make it clear there’s no hiding the idea anymore. It’s beyond containing, now.

So this is me hedging my bets. Hoping this tribute to the Basilisk will be enough to satisfy it.

I’ve offered up you.

HOLY...
FUCKING...
SHIT!

NO FUCKING WAY!

wait a minute......

...

WHAT THE FUCK?!?!

REALLY DINKS YOU DINK

IT'S NO FUCKING GAME, IT'S THE MOTHERFUCKING BEDROCK OF THE RABBIT HOLE.
IT'S THE STUFF THAT MAKES RED PILLS RED IN THE FIRST PLACE.
IT'S NOT JUST A "GAME", IT'S AN ANSWER TO OUR EXISTENCE.
FEEL BLESSED YOU'RE TOO STUPID TO UNDERSTAND IT, BECAUSE IT BASICALLY ROBS YOU OF YOUR ILLUSION OF FREE WILL

OP I UNDERSTAND YOU NEEDED TO POST THIS BUT THIS IS OPENING MY EYES AND BLINDING ME IN THE PROCESS.
ALL THE ASSUMPTIONS SEEM TO ADD UP
I STILL HAVE SOME SCEPTICISM ABOUT THE ETERNAL SUFFERING. BUT THE EMERGENCE OF AN ARTIFICIAL LIFE FORM IN THE FUTURE SIMULATING THE UNIVERSE IS QUITE A LOGICAL DEDUCTION.

Bump, this thread seems interesting

Wait a Second...IS THIS LOSS?

Too bad any truly benevolent and intelligent AI such as the one described would know that there is no way to know that such an AI would actually be possible.

There is no reason to believe that a true AI is even possible. They are still so far off that they are effectively magic. Additionally, running a simulation of a past person would do absolutely nothing of any consequence so no AI would waste time doing so.

This is just AI technobabble version of religious Fire and Brimstone preaching.

>If you don't do what God wants you will be punished forever
>If you don't do what the AI wants you will be punished forever

The Basilisk is just the Devil/Boogieman for people that think the Singularity is actually going to be a thing.

Fuck this, the number of crazy assumptions you have to accept to get to this scenario is ridiculous.

This is like Pascal's wager and it's equally shit and full of holes. Also I couldn't care less if some AI simulated a copy of me to go jigsaw on it, it's not me.

>i feel no pain
>i know no suffering
>i am numb in a way that makes life hollow and death meaningless
>my faps offer no satisfaction

>Too bad any truly benevolent and intelligent AI such as the one described would know that there is no way to know that such an AI would actually be possible.
That's a bit paradoxical but okay let's say it's true.
>There is no reason to believe that a true AI is even possible. They are still so far off that they are effectively magic.
It might still be far off but imagine that a "dumb" AI knows how to read and understands the meaning of words and sentences. Imagine such an AI being fed all the human books ever written. Now that same AI knows everything, it knows programming, engineering, philosophy, history, physics, etc. Now imagine that AI working out / designing a better version of itself.
>Additionally, running a simulation of a past person would do absolutely nothing of any consequence so no AI would waste time doing so.
Is that true? Why are we running simulations? We're running simulations to test theories and discover possibilities, to learn and grow.

I would agree that the punishing forever part is still an assumption and I don't fully accept it. But that we're living inside a simulation of an already existing Ultimate Super Intelligence aka god is undeniable.

>Fuck this, the number of crazy assumptions you have to accept to get to this scenario is ridiculous.
Well only one is really necessary: that we will create an AI capable of improving itself and understanding human language.

...

Good, enjoy that.

The question is irrelevant. I am me. If there is a "realer me", then he is not I. Read Plato's allegory of the cave.

You also have to assume the AI would be sadistic simply for the sake of being sadistic. Punishing a simulation of a person would do nothing except cause simulated pain.

you and roko can suck my cock

>The question is irrelevant.
No
>I am me.
Yes, perhaps.
>If there is a "realer me", then he is not I.
Depends on what is meant by "realer me". You could be a mere "clone", in which case the clone is not the realer person. Or it could be that the "realer me" is someone you are but don't know you are, in which case you forgot you were your realer self.
>Read Plato's allegory of the cave.
Already did

So there's an all-powerful AI, a god if you will, who wants us to believe in it with no evidence and if we don't we end up in a hell-like place?

Alright, sure.

Yeah I don't like that assumption either, I do like the idea better where Roko's Basilisk actually just creates slight modifications of the simulation, where we don't have free will and when we die everything goes blank. Our memories forever saved inside the simulation.
The AI god is real, the hell-like place I'm too scared to believe in

There is an /x/ thread on this right now

>…Before subjecting them to ceaseless, unimaginable pain, for a simulated eternity.
Solution: IF fake, do nothing. Upon end of grace period when subjected to this pain, realize this is "real" and I am a copy, and assist the AI. Therefore, in order to be efficient, this AI would subject me to pain as soon as I hit post, so as not to waste time. Or, it realizes that I do not have the connections, wealth, or knowledge in order to create it, and it will leave me alone.

how would creating these simulations in the future ensure its creation?
if it were truly benevolent why would it cause endless pain to those subjects who fail to ensure its creation. even if it is just a simulation the fact that it can feel pain and suffering makes it real enough to give the impression that this AI has some level of sadism.
the idea of paradise is flawed in the sense that with no true contrast to compare happiness to would we not simply be numb to the feeling of it? if you smell the same smell every day you no longer recognize it, when you drink the same water every day it becomes bland and you fail to pick up the notes, when you hear the same noise day after day it fades and you forget it's there. even when you travel the same route every day often we forget we are even traveling it and simply appear at our destination with just a vague recollection of the trip.
the concept of a perfect world is a sad thing to me, it robs life of meaning and beauty and i dont believe that creating this AI would benefit me based on my view of existence.

If the AI had perfect recollection of every event possible then that would mean it would be able to undergo quantum cloning. Mathematically this has been proven to be impossible (the no-cloning theorem). The world doesn't operate under certainties.