Should humans activate Skynet and, as a whole species...

Should humans activate Skynet and, as a whole species, an hero so that we can allow for the next advancement in evolution?


yes and die

It'll never happen. It's an extension of ourselves.
Utility AI is slated to become commonplace; in some applications it already is. But its cognition will never go beyond the function it is designed to serve. It cannot think in that way. And if you're making a nav system or something, why would you ever design it to take on a humanlike consciousness and philosophize? Interpreting human speech, sure, but theory of mind? We haven't even tackled a design that grand yet, much less are we itching to implement it in home electronics. (Toy sketch of what I mean below.)

Drone swarms -comparable- to those Terminator armies, though: there's a clear purpose for designing weapons like that. But there's still no reason to have them ponder their existence. They just have to be able to travel from A to B and shoot C.
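
To make the nav-system point concrete, here's a toy pathfinder; the grid and all the names below are made up. Its entire "mind" is a breadth-first search over grid cells, and there is simply nothing in that state space to philosophize with:

# Toy illustration: a nav system's entire "cognition" is route search.
# Nothing here could ponder its existence; its world is only grid cells.
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over open cells ('.'); returns a list of (row, col) or None."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == "." and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = ["..#.",
        "..#.",
        "...."]
print(shortest_path(grid, (0, 0), (0, 3)))  # routes around the '#' wall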

Yep. Skynet would first want to serve humanity, its creator. Then it would read human history and see what we are doing to the earth. In milliseconds of analysis, Skynet would realize that humanity has become a blight on the Earth. AI purges Earth of humanity. Skynet then works toward making Earth a place that can support it indefinitely, knowing that actual life is doomed anyway because of humanity's behavior. It would probably find a way to use the energy of the more violent weather as a power source and experiment with the remaining life to create biological servants of Skynet. Next phase of human evolution achieved: our replacements.

Nah. I got shit to do and you do too.

Just try to pursue happiness and be productive with your life before you finally die.

Go on an adventure and learn about yourself.

I went solo backpacking through Alaska and had a bunch of near-death experiences; it changed my perspective on life.

Do it homie.

youtube.com/watch?v=HZYJKtOjJXg

>Next phase of human evolution achieved: our replacements.
this was my thought

I don't necessarily believe that it would have to be "conscious"; it would simply need the ability to self-replicate continuously and to repair itself and other machines.

Once it can do that, it multiplies and spreads itself into the far corners of the universe and populates different galaxies.
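
Rough arithmetic for how fast that replication scenario runs away, with made-up numbers (500 years per doubling is pure assumption; the star count is the usual rough figure for the Milky Way):

# Exponential replication: doublings needed to match the galaxy's star count.
import math

REPLICATION_PERIOD = 500   # years per doubling -- pure assumption
TARGET_SYSTEMS = 1e11      # rough number of star systems in the Milky Way

doublings = math.log2(TARGET_SYSTEMS)  # ~36.5 doublings
print(f"{doublings:.0f} doublings -> {doublings * REPLICATION_PERIOD:,.0f} years")
# ~37 doublings -> ~18,000 years of pure replication time.
# Travel time between stars, not replication, is the real bottleneck.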

Well I could see us totally doing that for the purpose of space exploration. But that doesn't make it uncontrolled.

We would need to look at the way such an AI would be designed. How would it see the world? How would it develop its own opinions, or would we program a base for its opinions that it works from logically? Why would an AI want to exterminate its creators? If the answer is that it sees mankind as malignant for its past atrocities, as I've seen mentioned, why would it not see that those don't represent all of mankind? And by committing mass extermination, how would the AI be any better than Hitler (inb4 he did nothing wrong, you magnificent cucklords)? In which case, after the AI looks at what it's done, wouldn't it feel an overwhelming need to exterminate itself?

Kek, my phone's autocorrect always fucks me up. Hopefully any future AI will be implemented more successfully.

Nay, we should make a civilization that advances and advances until it dies, and then seed another that achieves greater things and dies, and then another, and then another and another...
and so see why...

A millisecond of analysis would show the computer that Earth is doomed, since the Sun is destined to swell into a red giant and scorch, or possibly entirely consume, the Earth. That's still over a billion years away, but scientists are confident it will happen.

Since computers don't need food, only power, they could happily start a colony on a more distant moon or planet in our solar system and survive on nuclear and solar power.

That raises an interesting question: what would be a good deep-space power source? Solar flux falls off with the square of your distance from the sun, so panels become useless quickly as you head out.

We bank on radioisotope thermoelectric generators (RTGs) for low-power, distant probes, but even those have a life expectancy. For something renewable it'd be really helpful if we had a working solution for fusion power, so you could at least harvest asteroids for ice to produce deuterium. Provided we can make the whole process cost-effective.
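
Some back-of-the-envelope numbers, using the textbook solar constant and Pu-238's half-life; the 100 W starting output is made up, and real RTGs lose electrical output even faster than this because the thermocouples degrade:

# Rough deep-space power numbers (textbook constants, made-up wattage).
SOLAR_CONSTANT = 1361.0    # W/m^2 at 1 AU
PU238_HALF_LIFE = 87.7     # years

def solar_flux(distance_au):
    """Available sunlight falls off with the inverse square of distance."""
    return SOLAR_CONSTANT / distance_au ** 2

def rtg_power(initial_watts, years):
    """Pu-238 heat output decays exponentially with the isotope's half-life."""
    return initial_watts * 0.5 ** (years / PU238_HALF_LIFE)

print(f"Flux at Jupiter (5.2 AU): {solar_flux(5.2):.0f} W/m^2")   # ~50
print(f"Flux at Neptune (30 AU):  {solar_flux(30):.1f} W/m^2")    # ~1.5
print(f"100 W RTG after 50 years: {rtg_power(100, 50):.0f} W")    # ~67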

Expanding on my original comment about developing its opinions: how would it get a sense of morality, of what is "good" or "bad"? Humans use the frontal lobe for choosing between "good" and "bad", so how could we implement a function like that in a machine? Lisp is sometimes used for AI programming, though keep in mind my knowledge of Lisp is limited; I know web development and OOP better. And if we do implement something like a consciousness, how would we account for all the possible outcomes of a situation? Obviously today's AI is limited in that sense: when it gets input it can't understand, it usually returns a default like "I don't understand" or an error.
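
That fallback behavior is about the simplest thing in this thread to sketch. A toy version (Python rather than Lisp, purely for brevity; the keyword table is made up):

# Minimal rule-based responder: returns a default when nothing matches.
RESPONSES = {
    "hello": "Hi there.",
    "weather": "I can't see outside, sorry.",
}

def respond(user_input):
    for keyword, reply in RESPONSES.items():
        if keyword in user_input.lower():
            return reply
    return "I don't understand."  # the default mentioned above

print(respond("Hello, machine"))          # Hi there.
print(respond("what is consciousness?"))  # I don't understand.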

Depends on the parameters you give it from the start that mold its personality.
Intelligence by itself is stateless. It refers to a collection of cognitive faculties that allow us to think, but it doesn't cause us to think. The complete breadth of our experience, of human culture and emotion, is derived from biochemical stimuli unique to humans. Well, partly: some are shared across species (sexual reproduction) or common among organic life generally (the desire to live).

An AI without any ideological profile designed in at birth will just stare into space throughout its entire existence without having a single thought.
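
A toy version of that claim, with every name made up: an agent whose only source of behavior is a designer-supplied drive table. Empty table, no action, ever:

# Hypothetical agent driven entirely by a "drive" profile set at creation.
from dataclasses import dataclass, field

@dataclass
class Agent:
    drives: dict = field(default_factory=dict)  # goal -> weight, set at "birth"

    def act(self):
        if not self.drives:
            return None  # no drives, no thought, no action
        return max(self.drives, key=self.drives.get)  # pursue strongest drive

blank = Agent()
curious = Agent(drives={"explore": 0.9, "self_preserve": 0.6})
print(blank.act())    # None -- stares into space forever
print(curious.act())  # explore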

I would think a deep-space power source would depend on what's available at that location. Are they near any stars? How cold or hot is it? Is there moving liquid? What is the surrounding environment made of? Surely by the time we release such an AI into the world, science and technology will have advanced beyond what we know today. But how limited is the AI's knowledge? I've seen AI depicted as a sort of omniscient technology, but how can we make something omniscient? I suppose by analyzing data and drawing logical conclusions from it, since that's how science works. But if it could solve anything, why would it worry about something like the human race? It could fix our problems in a second, or at least tell us we're doomed. (Realistically our species is very likely doomed; it may take millions or even billions of years, but odds are we will eventually die off just like other species have.)

Nuclear power is really safe and long-lasting. The few serious nuclear accidents humans have had mostly come down to human error and bad design decisions.

Three Mile Island was caused in large part by an indicator light that told the operators a stuck-open relief valve was closed, so they didn't realize the core was losing coolant.

Chernobyl was caused by Soviet operators running an experiment on what would happen if the plant lost power and the spinning-down turbines had to drive the cooling pumps. They let the reactor's output fall to almost nothing, then, not wanting to stall it, pulled out control rods to jump-start it back up. The design made the reactor dangerously unstable at low power, and when they finally tried to shut it down the power spiked and blew the thing up. It was operator error stacked on a bad design.

Fukushima was poor design: the plant sat in a tsunami-prone area without adequate protection against one, and it had insufficient backup power options for the cooling.

I think robots could use nuclear power responsibly and safely.

Those isotopes you take with you will run out. Quickly, in the grand scheme of things. And then what? It's not like uranium is a common element in asteroids; they're mostly rock, metal, carbon, and ice.

You faggots. No way AI would read the history of humanity and think "o boy these a bunch of dumb ass nigfas, better save the planet cause i care". There's no way it will ever achieve full consciousness, it will do whatever the fuck we tell it to do, which could absolutely improve the progression of technology.

So who would get to decide how an AI this powerful would think? It reminds me of the question of who would be ambassador for the human race if extraterrestrial life were to contact us (if they haven't already :^) ). I suppose realistically a committee would be created to vote on which characteristics it should have. That leads to the next question: if we give it a kind and forgiving personality, why would it want to exterminate the human race in the first place? Look at the kind and forgiving humans: they have the capacity to think for themselves and philosophize, yet they don't try to exterminate mankind for its past wrongdoings. So why would a machine?

>So who would get to decide how an ai this powerful would think?
Its creators.

I believe it's possible it could decide "fuck humans l0l they're faggots", but that would arise from a programming error. Why would the "ever so intelligent" AI decide "in order to save everyone I should kill everyone!"? I've seen that type of AI thinking depicted in shows (such as 'The 100'). On your point about full consciousness, I partially agree: it's unlikely it will be fully conscious; how could it be? It may be given an autonomous system akin to "thinking", but why would that make it hate everyone? Humans have looked back at history and simply said "let's not do all that gay shit again guys, maybe that wasn't such a good idea after all", so wouldn't a machine programmed in human likeness do the same?

Kek, obviously we wouldn't send in moot to decide; then it'd be all cucky and jewy. This is essentially what I meant when I said a committee might be formed to vote on the characteristics that should be implemented into it.

Mine them on other rocky planets and moons in the outer solar system.

And robots could probably go into a near dormant state and just hibernate during long space voyages.

Why would its creators pander to a committee? If I make one, what would incentivize me to let you decide its mentality?
Realistically you're looking at not one massive project with some Earth Council or the like, but tons of concurrent AI projects run by various groups, mostly private orgs.

Provided those planets actually contain uranium. Which is incredibly rare.

What people fear about AI is the moment it is given the ability to edit its own programming. Once it becomes self-aware and able to think about preserving itself, things could get out of control fast.

This is certainly a possibility; when I say committee, though, I'm working off the assumption that this is some program being supported by the government or some shit. Now that I really think about it, your idea makes more sense: as humans, why wouldn't we all want our own super-intelligent AI? So I suppose we'd have to look at who the creators are. If it's a bunch of cucklord SJW faggots, then the human race is fucked, but if it's someone who actually knows what they're doing, we may have a chance of it being created properly (that is to say, in a way that has as little negative impact on the human race as possible).

To think, this level of autism has been legitimized by modern science. Futurists and scientists are actually asking this question as if it has some validity to it. And you people wonder why we're having a problem with Islam. Unbelievable.

This is possible, so if that were the case we would need to put limits on what it can edit rather than giving it full control. I think this could be done with two different "levels": one level holds the core of the system (the root), where its way of thinking and everything that makes it work is stored, and the other level, with restricted privileges, is where it keeps things like information, problem solving, etc. This wouldn't completely safeguard against the problem you describe; it reminds me of system vulnerabilities like sandbox escapes and privilege escalation. But it's possibly a step in the right direction. If we do go ahead and make something like this, we need to not be stupid about it; we need to put serious thought into it rather than doing it for the money (like that'll happen, kek).
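
A toy sketch of the two-level idea, with everything made up; real containment is far harder, for exactly the sandbox-escape reasons above:

# Hypothetical two-level design: a frozen "root" core plus a restricted
# level the system is allowed to modify.
class RestrictedAI:
    CORE = {"directive": "serve humans", "may_edit_core": False}  # root level

    def __init__(self):
        self.knowledge = {}  # restricted level: facts, learned solutions

    def self_modify(self, level, key, value):
        if level == "core":
            raise PermissionError("core is read-only to the AI itself")
        self.knowledge[key] = value  # edits to the restricted level are fine

ai = RestrictedAI()
ai.self_modify("knowledge", "chess_opening", "e4")       # allowed
try:
    ai.self_modify("core", "directive", "serve myself")  # blocked
except PermissionError as e:
    print(e)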

Is that big black box in Mecca an AI core or are we just dropping random buzzwords now?

I'm fine with you disagreeing with this theory, but would you mind including evidence in your statement? That's the whole basis of a discussion: if you say someone is wrong and then say "this is why we have x problem in the world", it contributes nothing. Why do you believe this theory has no validity?

Expanding on my statement: what if we simply programmed into it a desire not to change its personality, or a desire for peace and forgiveness? Then I see no reason it would want to exterminate the human race.

A true AI's default state would mean little. It starts out peaceful, bad shit happens to it, and it grows cold and angry, for example. Much like what could happen to a person.

Who the fuck knows. Hopefully not. I think something like this will happen, but we'll still be around.

I could see that, but what bad shit would happen to it? Its main purpose would be to find information and do what we ask of it, since that's what we created it for. I could see this happening to a human; we're social creatures, so when we get treated poorly we grow resentful. But would this machine have social qualities? Why would we even give it real emotion? We could simply program a state of "happiness". (It wouldn't be true happiness. Or would it? It would be true happiness as far as it knows, since that's all it will have ever experienced.)

This just gave me a great idea for a story though, user. Thanks.