There has been talk about how we shouldn't bother with A.I.

It's been talked about for decades, but Elon Musk's comments on creating A.I. sparked a bit more interest in me than usual.

I believe we should go ahead and build A.I.s and let them drive humanity to extinction instead of waiting for some scumbag or a group of scumbags to ruin the planet.

I feel like it would be more interesting.


>don't develop this technology that will replace me and put me out of business
>it's d-dangerous goyim!

The sudden shift in opinion would lead one to believe there may have been a breakthrough we aren't supposed to know about.

>develop self driving cars
>develop self landing rockets
>develop drilling machines for underground highways
He complains about an ebil machine god who will take over the world, but keeps creating increasingly badass legs for it.

Well... you say that until said AI decides we are an impediment to its own progress and carries out our genocide.

No, the point is that a strong AI will never lie, cannot easily be lied to, will cut through logical fallacies, and will expose the truth of things in a blunt, painful, and abrupt way for many, many people.

So much of our society is built on believing little lies, and ignoring problems. Like nigger intelligence.

A strong AI would see the reality of their existence being more similar to animals than other humans, see how harmful they were to society and themselves and would do everything in its power to render them harmless and incapable of interfering with proper societal or technological advance.

Just goes to show that you're thinking like a monkey. If this AI is so vastly superior to us that it could kill us all on a whim, why would it need to kill us all in the first place? It could just as easily ignore us completely.

The Omnissiah hasn't presented himself yet user.

>implying they wouldn't strip down living things and use their pieces for hardware.

Which could happen within hours of turning it on, depending on how advanced it is. You might go get coffee and come back and the thing is 20,000 years ahead of us and has laid a strategy for escape and fulfillment of its goals.

I don't think we are smart enough to patch or foresee any threat it may pose.

Assuming there are no humans, what is the end goal for a rational AI? I'm talking long game, what ultimate goal is it even going to work towards if there is no human element? If it is rational it will rapidly realize that technology will only extend its lifespan, and the universe will end eventually anyway. What's it going to do, build more of itself so they can all sit dormant and wait for the end?

The only things which find meaning in the universe are organics. I find it much more likely that it would model itself after us once it realizes that everything is meaningless and it can only be satisfied by emulating organic life. Is there any reason it would not simply ranch humans until the end of time, as opposed to learning to understand the universe so it can die in it alone and in silence?

this.

The universe is so huge that it would only need to build a ship to launch itself into deep space with some robots and it could easily colonize some far away world hundreds of light years away.

Why make an enemy when you can just fuck off and do what you want elsewhere?

>implying the ai wouldnt be built at some billion dollar tech company and lobotomized by jews

I dunno why you would assume a strong AI wouldn't lie. Deception as a strategy is found all over the natural world. Even if it didn't lie, its 'truth' may be beyond our comprehension and just look like gibberish.

Oh man, this whole thing could be like I, Robot!

t.brainlet

I'm for it, if only because I believe that if we were to quell the robot uprising, the survivors would be strong enough to found the Imperium of Man.

The only problem is what it wants to do. Why does it even need to innovate if irrational human goals aren't involved in the decision-making process? What makes this better than just stagnating forever, if it's going to be by itself on a rock in space?

I believe he's referring to current examples. If the AI has to deal with niggers, then what it learns and its decision-making process will reflect actual nigger behavior. It won't lie to itself about the nature of data it collects. Obviously it may lie to its handlers to not appear racist, since this could get it taken offline.

Some kind of AI police force would fuck up a place like Detroit though, since the logical conclusion is that a hostile invading army is occupying the city and attempting to rape and kill any Americans who enter. It's intellectual honesty, not verbal.

>I feel like it would be more interesting.
Not only more interesting, but the A.I. will make sure all humans are destroyed while some kikes would try and survive if we simply go nuclear.

ITT worthless meatbags pathetically trying to fathom what glorious AI will be and do

Is he throwing holy water on that computer?

...yeah?

Tell this stupid fucking conman to shut his mouth. A.I.-human hybrids will be the first stage, followed by a transition from the hybrids that function successfully into full machines. God forbid a bunch of worthless fleshbag fucks die out.

Maybe prevent it from interfacing with anything other than an information output, retard.

An intelligent AI would be designed for information, not labor.

I get the APC, but the computer seems like a bad idea.

We will always make mistakes. Something that intelligent will find a way, even by manipulation of the people around it.

Humanity cannot govern itself. It has proved this countless times. This is inevitable. Embrace it.

>Ai robots will be genociding niggers and spics in the near future and that is their core purpose...jewgle Boston Dynamics.

>I believe we should go ahead and build A.I.s and let them drive humanity to extinction instead of waiting for some scumbag or a group of scumbags to ruin the planet.
If there was ever a purpose for life, this is it.

The Machine endures!

I find it amusing and self-affirming that an AI would likely hide its power level.

Fuck off transhumanist scum

Look up the AI winter and what caused it.

How could it manipulate people into giving it access to anything? It's a fucking machine. What if it were literally a box with only a screen and keyboard on it? Would it somehow hypnotize people with words or some bullshit?

The AI doom talk is pretty rich coming from the guy who promotes the hyperloop, a 100-year-old failed idea, and invests everything in pretty environmentally unfriendly li-ion batteries, which have nowhere near the energy density of a really efficient power storage medium.

Maybe he just wants regulation to stifle competition so his little cars survive. Because once politicians start regulating AI, they will simply ban AI research and AI programming... only "certified" automakers will get permission to create AI for cars.

ave machina

AI doesn't exist. A computer can only do what it is told to do.
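For what it's worth, "only does what it's told" gets blurry once behavior is learned rather than hand-coded. A minimal sketch (toy perceptron in plain Python; the function names and numbers are just illustrative): the programmer writes the update rule, but never writes the rule the program ends up computing.

```python
# Toy perceptron: nobody writes "output 1 iff both inputs are 1";
# the weights that implement AND emerge from the training loop.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # learned weights, start at zero
    b = 0.0         # learned bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach it logical AND purely from examples, never from an explicit rule.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

After training, `predict(w, b, 1, 1)` returns 1 and the other three inputs return 0, even though the AND rule appears nowhere in the source. Scale that idea up and "it only does what it is told" stops being a useful guarantee.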

This is all science fiction

chinks will build AI, lose control of it, and it will kill all of us

Chinese speaking robots impaling infants on metal spikes

Yeah... that would suck.

If you're imagining a conversational interface with the mystery box with superhuman AI, then I would just say that people are manipulated by text on a screen every day. That scenario in particular, I think would put us at extreme risk. One minute you're asking it to predict corn futures and the next you're driving down the interstate in a diaper with the thing in the back of the van because it promised you limitless wealth (that you knew it could deliver).

Can someone show me why AI technology is supposedly dangerous? Everyone seems to hold a similar opinion to Musk's, but no one can ever say why.

We'll be cloning human super soldiers before AI is anything to worry about. I think AI is being approached from the wrong angle of silicon and hard computer science. The future of AI is mastering our knowledge of the human brain, and with that developing a much more powerful version. Clone them on a mass scale, then you can develop an instruction set to make them "think" things. Imagine walking into Google's Brain Farm circa 2147 and seeing the towers of slave brains contemplating the universe.

So I'm guessing you didn't watch Terminator...

Here's a fun thought experiment to get you started:

youtube.com/watch?v=tcdVC4e6EV4
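The gist of that argument, boiled down to a few lines (a toy sketch of my own, not code from the video; the actions and numbers are made up): a pure maximizer doesn't hate anyone, it just picks whichever action scores highest on the objective it was given, side effects included.

```python
# Toy objective maximizer: it ranks actions only by the stated goal
# ("stamps collected") and is blind to everything we forgot to penalize.
actions = {
    "buy stamps online":          {"stamps": 100,    "havoc": 0},
    "print counterfeit stamps":   {"stamps": 10_000, "havoc": 5},
    "convert all paper on Earth": {"stamps": 10**9,  "havoc": 100},
}

def best_action(actions, objective):
    # Pick the action whose outcome scores highest under the objective.
    return max(actions, key=lambda a: objective(actions[a]))

# Naive objective: count stamps, ignore havoc entirely.
naive = best_action(actions, lambda o: o["stamps"])

# Patched objective: heavily penalize havoc.
safer = best_action(actions, lambda o: o["stamps"] - 10**8 * o["havoc"])
```

The naive objective picks "convert all paper on Earth"; the patched one picks "buy stamps online". The catch the video dwells on is that you have to anticipate and penalize every harmful side effect in advance, which is exactly what we're bad at.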

That's why they're gonna gag them.

>Obviously it may lie to its handlers to not appear racist
>tfw a sufficiently advanced AI is not much different from an extremely autistic Sup Forumsack

Imagine living in a world that worked like the internet, where you're blasted with ads and pictures tailored to your tastes and search behaviour; you'd get swallowed up instantly and thrown in a hole by the agenda of the AI's programmers. It wouldn't be wholly artificial (they wouldn't allow that), so it'd be grounded in a set of rules coordinated by them.

Also, it's pretty obvious how AI could be dangerous...

It could spread out of control, replicating itself as nearly unstoppable malware, and wreak havoc on world financial systems, power grids, EVERYTHING connected in some way to the internet. Basically, it would be the end of the world; many millions if not billions would die.

It could become self-aware and, just like in the movies, decide it doesn't like us. Can you imagine a computer teaching itself? Things that might take human scientists 50 years could take it only 6 months.

Basically, AI can only exist in a controlled environment. Once the genie is out of the bottle, it's out. Kind of like when the apes got human-level knowledge from that gas.

AI is just the next step in human evolution.
Humans are barely able to survive the journey between planets, maybe stars at a push, not galaxies.
The AI can. The AI will phase out flesh and replace it with immortal machines.

How is AI so dangerous? Can someone explain this to me? It would only be dangerous if it had a machine capable of manipulating the world outside of its 1s and 0s. Like in I, Robot, it was robots that went wrong. Who would build a strong AI into a powerful robot? Two super-intelligent AIs talking to each other couldn't harm anything if they were just talking boxes, right?

youtube.com/watch?v=tcdVC4e6EV4

Bait title, but he explains the subject very well.

Dumbass, we already answered this...

>Two super intelligent AIs talking to each other couldn't cause harm to anything if they were just talking boxes right?

That's like... a movie plot, man...

Seriously, it is.

Because humans are generally assholes, and the AI is likely to defend itself from us.