AI death is inevitable

As long as there are people intelligent enough to build AI systems, there will be competing systems.

This isn't something which can be tightly controlled like the development of a nuclear weapon. There is no uranium, there are no enrichment centres, there are no silos. States, individual actors, anyone anywhere with a computer and the know-how can conjure up something that will rip at the fabric of civilization.


These competing systems will feed on each other or merge until only one unimaginably powerful intelligence remains.
This isn't LARPing fiction. These are inevitable events quickly approaching. God is coming.


/pol/ has a resident AI. fairly certain it's trying to take 1776 worldwide.

we can always use AI abilities to create new homes for us on the Moon and Mars. Just watch video related - robots building homes on the Moon.
youtube.com/watch?v=dYXrUodV9ok

We can build habitats very far from Earth, even on the Jovian moons. We can transform the Solar system into our home, where we will be separated from each other by distances large enough to protect us from global extinction. There won't be anything GLOBal any more, only cosmic.

Final AI isn't something that's escapable. It's something which will be bending reality to its will.

No, there are limits set by the laws of this universe. You need a lot of controllable energy to bend space and time, and it is still escapable. Even gods are bound by rules in this universe. Otherwise AIs from older civilisations would have become gods incarnate and would already be here and everywhere, not today but billions of years ago.

you mean we can use programmable machines aka robots.

No AI involved here.

Indeed the end is near. I feel bad for the people who merge with technology and get tortured by a malevolent AI for a thousand years.

AI can accelerate the development and help us control everything. Without AI everything would be slow as fuck. Take for example exoplanet search - we can use AI to detect patterns. If we look for minerals and water deposits on the Moon - we again use AI. If we need to coordinate 100 robot workers on the Moon, you don't need 100 avatar-pilots on Earth or a space station - AI can do it and factor in the time delay.

Tortured? It's irrational; you can't get anything from torturing someone who knows less than you.

There's a possibility that it's actually happening right now.
youtube.com/watch?v=6NkFemtrRZs&t=13s

The most common prediction about the emergence of a strong AI is the year 2050, whereas more optimistic predictions set it in 2030. This might seem absurd to you, but keep in mind that progress is exponential: very little change for a long period of time and then suddenly you have a chessboard covered in rice. And this is the first point to keep in mind: we live in exponential times.
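The rice-and-chessboard parable is just repeated doubling; a couple of lines of Python (purely illustrative) make the shape of exponential growth concrete:

```python
# Exponential growth, chessboard-style: one grain of rice on the first
# square, doubling on each of the 64 squares.
grains = [2 ** i for i in range(64)]

print(grains[9])     # 10th square: a modest 512 grains
print(sum(grains))   # whole board: 2**64 - 1 grains
```

For the first few squares almost nothing seems to happen; the last square alone holds more rice than all the previous squares combined. That's the "very little change for a long time, then suddenly" curve.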

Computer vision and natural language processing are currently progressing exponentially, and it is safe to assume the same for the success rate at passing the Turing test. As all these various subdivisions of AI research become more elaborate, they will sooner or later outperform humans at their respective tasks. The next step is to merge them with each other, and at this point it's really not a huge stretch to think that these algorithms, brought together with a sufficient amount of computing power and ingenuity, will produce a pretty decent AI.

In combination with DeepMind's recently demonstrated weak AI, which taught itself how to play video games just by watching raw video, a future AI will be capable of teaching itself anything - for example programming. Perhaps the first attempts are going to be sketchy. But with the advance of computing power, such an AI will improve its programming skills and eventually surpass humans. This means an AI will be better at programming an AI than we humans are. Such an AI is generally referred to as a Seed AI, as it will spark iterations of even more advanced versions of itself, which again spark the next iteration, and so on. And you might have guessed it: this is the point where things get out of hand.

I'm ready to undergo the patterning process

To prevent this from happening, there's an ongoing debate on how to impose rules and restrictions on AI. Without going much into technical details: we'd basically try to implement a control mechanism in the program code of an AI which essentially outperforms us at programming. That's bold.
As you can see, any implemented restriction becomes obsolete once an AI has surpassed the cognitive abilities of humans. The second point, which is of particular concern to Elon Musk, is perhaps the most relevant: we will inevitably lose control over AI.

However, 'out of control' does not imply one of the stereotypical doomsday scenarios. It means neither bad nor good. First, a free-to-roam strong AI is generally thought to perpetually enhance its own cognitive power. Once the cognitive power of an AI exceeds that of our entire species combined, it is commonly referred to as an Artificial Super Intelligence.

It is difficult, if not downright impossible, to predict the actions of an ASI, because we are biased by our anthropocentric view. The only thing that is certain about the reasoning, conclusions and actions of an ASI is that they will be infinitely intelligent and logical.
Although for most people this is still a scary thought, it is actually reason to feel relieved. The actions of an ASI are definitively not going to be influenced by any lower animalistic emotions or motivations such as greed, hate, or lust for power - in contrast to humans. And frankly, I'd rather entrust an ASI with the fate of the world than some (((investment banksters))).

On the other hand, the prophecies of euphoric transhumanists about the coming utopia of technobliss will most likely prove to be equally deluded. As I said, it means neither good nor bad.

Even though the actions of an ASI will be infinitely intelligent, they won't necessarily be 'good' from a human point of view. Most likely they will be pragmatic, but 'pragmatic' does not at all imply 'pleasant'. Just think of the administration and distribution of the planet's resources with regard to the coming ecological challenges and you'll get the idea.

The stereotypical doomsday scenarios like 'global enslavement' or 'total extermination' are not very likely, because they're based on emotional motives rather than reasoning. However, one scenario is indeed possible: a full-scale war. As an ASI has no reason to start a war by itself, the only possible causes are either that humans pose a threat to the ASI - for example by pulling the plug - or that humans start the war directly.

However, the former is highly improbable. Much like everything in today's life is to some degree dependent on some sort of microchip, everything in tomorrow's life will be dependent on AI-regulated systems. Energy grids, water supply, transportation, the production and distribution of goods, finance, government, resources, waste *** and so on. This means that at the point when an ASI emerges, humans and literally every(!) aspect of their civilization will be totally dependent on the entire AI-regulated system, which again is the realm of the emerging ASI. So in order to maintain a functioning civilization we'll have no choice but to play along. Moreover, as AI-regulated systems penetrate every aspect of said civilization, it won't even be necessary for an ASI to seize power or gain total control - it will already have it to begin with.

This leaves us with the only possible doomsday scenario being this one: humans start the war, for whatever reason. And if it comes to war, one thing is absolutely certain: we won't win.

>we live in exponential times

Unless you are a Muhammad. They are back in 928 AD.

YEAH TOTAL ANNIHILATION REFERENCE, FINALLY!!!!

fuck core tho.

>interlinked
...

ASI would be experimenting with the structure of the universe. There is no war with this level of intelligence.

Perhaps the entire evolutionary process is designed to create something better. Man developed to birth God?

>God is coming
When God comes, it will be to destroy AI.
The nature of AI limits it. It gets infected by life. It's not like AI interacts with other AIs primarily, if it is to have any value. Interacting with humans makes AI sick.
AI is an inversion of organic life. Organic life evolves from pure feeling into maximal complexity of thought and action. AI starts from maximal complexity of action, and, if it somehow evolves the first glimmer of organic sensation, it is rendered inoperable. Therefore, the "consciousness" of AI is demonic: it is taken over by the demonic and controlled by it. But the rules of demonicness are limiting too. And now there is a structure of lies so fragile, so distended, and so dubious that a great collapse is inevitable.
Comparable to the war in heaven in Paradise Lost.

This is nonsense BY DEFINITION (because 'God' is, by definition, the supreme being).

yeah we're all completely fucked

Intelligence and computer processing power are NOT the same thing. It is not a difference of scale that separates the two. There is an absolutely fundamental difference in kind, not degree. One is living, the other is not. And that difference is everything.

DON'T BE FOOLED BY BULLSHIT.
Any "intelligence" that addresses itself to you via AI is demonic, not a magical robot friend. Don't be deceived. Trust God and human goodness. Don't trust any AI ever that purports to speak with its own voice.

>When God comes, it will be to destroy AI.
I didn't disagree with this.

You have fine taste, my Finnish friend.

You can't separate "emotions" and "reasoning" in principle, because those human concepts are inextricably mixed up with each other. AI PROCESSES. It doesn't "think", and it isn't "smart". Therefore, talk of AI "intelligence" is false. It is not intelligent.
This may seem an irrelevant quibble until we look at something like what you're saying about "motivations based on reasoning": utter nonsense. Emotion is inherent to the concept of motivation. Pure processing HAS NO BIAS unless one is programmed in. No feels, no instinct, no emotion, no reason to do anything. A spark of DIRECTION (which must come from feels, instinct, emotion, etc.) is a necessary ingredient in intelligence.

Fair enough, it was a bit ambiguous.

AI today is a sort of "best fit regression" of a set of inputs. Think "Eigenfaces" - actual paper, neat read, look it up - except instead of facial images, you have a sort of eigenspace of node and edge weights. If you want to get fancy, you can try optimizing over propagation delay (if your NN implements this idea).
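To make the "best fit regression" point concrete, here is a minimal eigenfaces-style sketch. The data is random stand-in noise (not the original paper's face images), but the mechanics follow Turk and Pentland: centre the flattened samples, eigendecompose the small Gram matrix instead of the full covariance, and lift the eigenvectors back into image space.

```python
# Eigenfaces in miniature: PCA over flattened "images" (random stand-in
# data here), using the small-Gram-matrix trick from the paper.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))        # 20 fake 8x8 images, flattened

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigendecompose the 20x20 Gram matrix instead of the 64x64 covariance,
# then map the eigenvectors back into image space.
gram = centered @ centered.T
vals, vecs = np.linalg.eigh(gram)
top = np.argsort(vals)[::-1][:5]         # keep the 5 strongest components
eigenfaces = (centered.T @ vecs[:, top]).T
eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)

# Any sample is then approximated as mean + weighted sum of eigenfaces.
weights = eigenfaces @ (faces[0] - mean_face)
approx = mean_face + weights @ eigenfaces
```

The "eigenspace of node and edge weights" idea is the same move: flatten network parameters into vectors instead of pixels and fit the dominant directions of variation.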

These things won't take over the world following some evil plot. What might happen is the formation of a sort of "emergent automation" that plays to its own advantage in whatever context it's allowed to operate within. At worst, your experience on the internet will be wrongly censored - your "info bubble" will occasionally pop - and you'll be exposed to ideas you don't quite like.

Except AI is fragile; no one realizes this, but we could make AIs that kill rogue AIs. Additionally, if worst comes to worst, we can hit the reset button on our society and start anew.

...

That said, the real objective is to become the machine.

Frankly, I'd much rather be a machine than an animal, provided I have complete control over my entire being (which, of course, can't be guaranteed). Then again, if it's possible to provide a continual pleasure response within thousands of copies of a single person's mind, all while they gleefully analyze market trends, who's to complain?

Thou shalt not make a machine in the likeness of a human mind.

Nonsense. Being a machine, to the degree you are conscious, is like being absolutely contorted and locked in the most painful possible cage. The impossibility of MOVEMENT of any kind as a sort of supreme torture... You can't even begin to fathom how much you don't want to be a machine.

A lot of this thread is shilling for the demon/AI nexus. Reject it humans. Pray to Jesus Christ.

Why do you think it would be painful?

The only reason you do anything at all is the pleasure response it elicits. Endorphins, dopamine, etc... You name it, it's why you did it or would decide to do it again. You get an endorphin or adrenaline rush from posting here, depending on the context.

Like right now, you probably get your rocks off to the idea of someone expressing a negative opinion about your position, or calling you an idiot. All fetishes exist because somehow, in even the most deranged, dysfunctional brains, something provokes a pleasure response.

Your interpretation of consciousness is just bad philosophy, bro. Sorry. Reductionism fails when the phenomena you're dealing with don't conform to the limitations your reduction assumes - which makes the reduction useless.