We only have a 5 to 10% chance of preventing killer robots from destroying humanity

>we only have a 5 to 10% chance of preventing killer robots from destroying humanity
Do you agree with him?

Cocaine cooked his brain

>>we only have a 5 to 10% chance
I want the mathematics behind this claim

We either prevent this or we don't. So 50/50

From another perspective. The only perceivable outcome to us is the one where we do. So 100%

Elon Musk bases this statement on the assumption that artificial general intelligence (AGI) will hyperevolve into artificial super intelligence (ASI), but seeing how we are nowhere near creating an AGI anytime soon with current trends in machine learning, I'd say he's basically full of shit.

Also
>lobby for laws regulating research and development of AI
>owns one of the largest companies currently involved in AI R&D
Really makes you think

>le science man is scared of matrix multiplication, vector addition and nonlinear functions
What does redditboi mean by this?
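
For anyone actually wondering: a single neural-net layer really is just those three things composed. A toy sketch (numpy, with made-up shapes and values, not anyone's actual model):
[code]
# One fully connected layer: nonlinearity(W @ x + b).
# Shapes and values are made up purely for illustration.
import numpy as np

def dense_layer(x, W, b):
    z = W @ x + b            # matrix multiplication + vector addition
    return np.maximum(z, 0)  # ReLU, the nonlinear function

x = np.array([1.0, -2.0, 0.5])   # input vector
W = np.random.randn(4, 3) * 0.1  # weight matrix
b = np.zeros(4)                  # bias vector
print(dense_layer(x, W, b))
[/code]
Stack a few hundred of those and you've got the thing he's scared of.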

> 5 to 10%
Too much. We need to work on those numbers.

>We either prevent this or we don't. So 50/50

That's not how probability works.

that is exactly how it works

He’s thinking of humans as robots

>elon musk
>scientist
He's a glorified investor with an eye for solutions that are a bit "out there". Friendly reminder that while Musk is currently enjoying some success, he also has a history of making bad decisions. Examples include wanting to move PayPal infrastructure from UNIX to Windows in the early 2000s, a choice that got him fired as CEO, and arguing that the US government should kill off all forms of subsidies, despite the fact that three Musk-started companies (SpaceX, Tesla and SolarCity) heavily rely on subsidies.

Not to mention that PayPal's growth under Musk was mainly due to pre-dotcom viral marketing.

This is the guy who couldn't see heartbreak coming when he was dating a known sloot socialite, but even I, a low IQ pleb, could see it coming from his emotional twitter posting about a year before it happened.

I haven't taken him seriously since then.

I think there will be a lot of incidents with killer robots, but I don't think humanity will go extinct because of it.
pic semi related

>pic semi related
I don't watch anime, care to explain?

humans made killer robots, and to fight the killer robots they made human-operated mechs

Just make more killer robots and tell them electricity is scarce and they need to compete. Bomb the last one remaining.

He's right. Skynet will kill us all.

>assuming X, and thus presuming Y, and from there if maybe Z, then we have a 5-10% chance of this thing happening
>OMG HES RIGHT WERE DOOMED

So how long do you guys think it will take us to make robots that will be able to pose a real threat to human life? By this I don't mean a piece of clockwork with a huge gun but a machine intelligent enough to deal with a human without special tools. My guess is at least 100 years.

Most likely never, desu senpai. The current development of AI research is moving in the wrong direction for that to happen.

We will have strong AI by 2050

> The current development of AI research is moving in the wrong direction for that to happen.
I do agree with that, but nothing stops those researchers from realising that they are wrong and starting to work in the right direction.

Crazy man

That's what they said about the then-almighty heavily armored soviet tanks. Then ATGMs were invented.

True enough, but deep learning was a giant leap in one of the "six schools" of AI, and that's the only reason interest in AI spiked. I don't follow too many pure AI research conferences or journals, but in every other field it's basically just "we applied deep learning to our huge dataset and look at these results", which isn't so much real development as it is merely "applications of data-driven pattern matching".
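
To be clear, by "data-driven pattern matching" I mean stuff like this, just scaled up massively; a toy nearest-neighbour sketch with made-up data:
[code]
# Classify a new point by the label of its nearest neighbour
# in a tiny made-up dataset: pure data-driven pattern matching.
import numpy as np

data = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [4.9, 5.3]])
labels = ["cat", "cat", "dog", "dog"]

def nearest_neighbour(query):
    distances = np.linalg.norm(data - query, axis=1)
    return labels[int(np.argmin(distances))]

print(nearest_neighbour(np.array([4.8, 5.0])))  # -> dog
[/code]
No understanding involved, just distances to things it has already seen.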

In the end, intelligence is purely about pattern recognition.

by robots do you mean lonely ugly antisocial NEETs or...

Oops, that's the definition of machine learning, not intelligence.

>preventing killer robots
The only way killer robots are going to come about is if they are truly superior to humanity, and in that case I'd embrace it.

I was halfway through my reply to you. Yes, machine learning is essentially pattern matching. But fuzzy logic and probabilistic computation have their place too.

Even a single cell is vastly more complex than anything humans have made. Large molecules have largely unknown exact physical and chemical attributes. DNA is a complicated mystery that has an effect on every cell function. So no.

>hyperevolve
what does this word mean and what are your sources for this?

unknown =/= complex
complex =/= meaningful

Evolving exponentially: the rate at which its capabilities evolve is a function of its capabilities. Source: I made it up, not that user.
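
If you want the meme in equation form (my own made-up toy model, just following that description): assume the growth rate of capability C is proportional to C itself,
[math]
\frac{dC}{dt} = k\,C \;\Longrightarrow\; C(t) = C(0)\,e^{kt}
[/math]
which blows up exponentially for any positive k, and that's where the whole "hyperevolve in seconds" framing comes from. Whether the real rate law looks anything like kC is the entire unargued assumption.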

Why is he so handsome o.o

>what does this word mean
As computers are a lot faster than natural processes, a self-improving artificial general intelligence could potentially evolve into a super intelligence in a matter of seconds.

>what are your sources for this?
Every artificial intelligence scaremongering post ever. See: Elon Musk.

Just a buzzword. Musk is not a real scientist, you know.

>you will either get a bj from a stranger in the next 10 seconds or you won't. So you have a 50/50 chance

Exercise and hair transplants.

So fucking sick and tired of this welfare collecting fag with his AI fear-mongering. Nothing we have today is actually "AI". There's no intelligence to speak of, just fancy algorithms designed to do a task. There will never be any kind of thinking, self-aware artificial mind running on silicon chips. It cannot be done that way. As far as we know, consciousness is only possible as a supervening property of a physical human brain. So a good start for AI hardware would be something similar to a physical brain. It has to have the physical properties or it won't work.

But much if not most of the complexity has evolutionary functional causes. What doesn't work gets discarded by evolution more often than not. Non-coding DNA (most of the eukaryotic genome) was considered useless. Newer developments show it really is not.

I believe in mundane percentage points, so yes.

Still better than the chances humanity faced multiple times in our history.

He has no authority on AI.

Nuclear weapons, pandemics, maybe really early survival, what else?

There are a lot of crazy psychopathic people in the world. Do you think one of them wouldn't find it funny to program robots to kill everyone in Africa? I mean, nothing in the world is unhackable.
It's not the robots we should fear. It's ourselves.

if you consider that most countries have no interest in abiding by western human rights ideals*, you realize he's probably right. you know china and russia aren't going to stop simply because of some paper agreement. after they knock down the grid and all electronics (disabling nuclear second strike) they'll have the killer drones wipe north america clean of humans, clean up the bodies and prepare it for inhabitation. cleaner than nukes, since all the infrastructure is still intact.

*OK I know the west doesn't, but at least it sometimes tries, which is still better than countries that dgaf about who they kill or why

>using drones to wipe everyone out instead of just scorching the area with nuclear fire
Doubt it.

>cleaner than nukes, since all the infrastructure is still intact.
Chemical and biological weapons exist, you know. And they are far more cost effective, as your victims suddenly get transformed into living weapons themselves.

>Killer robots
How can one be so retarded? Literally hardcode them not to kill a human under any circumstance. There. Solved. I saved humanity.

Well, that would be pretty funny.

>implying death is the only bad outcome for humans

The problem is that true AI cannot be achieved with hardcoded rules

People will want robots that can kill specific humans though.

The issue with AI is the same as with nukes: you can have the research halted, but that only makes you weaker, because there are other countries that want to make an AGI (nukes)

>we have a 100% chance to be welfare babies

>cleaner than nukes, since all the infrastructure is still intact.
Nukes can already do that with a sudden wave of lethal radiation instead of the standard explosion

The real problem with AI is that it performs better than we do, and it will eventually be given bigger and bigger responsibilities until we have close to no power

Is this what /g/ does to a man?
Make him retarded?

>Military software
>Hardcoded to "not kill humans under any circumstance"
I don't know, bro

Then your problem isn't robots. Why demonize them when humans have and always will find ways to kill each other?

>emotional twitter posting
Source me, I'm out of the loop. Is he hurt because Amber left him?

>ebil ai detects that humans are bad for the planet
>despite humans being an integral part of the environment for thousands of years
>instead of noticing how humans tend to the land and animals all over the world
>it makes this decision based on 10 urban areas in the USA

It's basically a WMD on a longer and broader scale. We've always demonized WMD and the creators of them.

I think his fear campaign is essentially shilling for the industry he's in. He wants attention and possibly more research gibs.
Robots are nowhere near as destructive as an atomic bomb. This is where the danger is

But humans already are a WMD on a longer and broader scale. We've already wiped out 95% of the terrestrial vertebrate biomass and we're not stopping there. It's our God-given right.

Such is life, adapt or die.

We decide for ourselves whether we kill or not; a weapon still requires us to make that decision. We are not a weapon, and individual people do not kill in the large numbers an AI potentially would.

AI is still an amateur compared to nuclear weapons. The scale is vastly different. Also it is far easier to control robots. A nuclear incident just requires a high tension situation and one retard to press the key. POOF! Mutual destruction

AI only requires one mistake, from not even a high tension international situation, and the scale could easily spread across continents.

Hahahahahahahaha How The Fuck Is Artificial Intelligence Real Hahahaha Nigga Just Turn It Off Like Nigga Flip The Power Switch Haha

t. Non-programmer

See: Undefined behaviour, shellcode exploitation, data corruption, etc.

>hardcode

How is hardcoding anything going to prevent human error? That's what he's saying.

let me guess: with the right amount of tax subsidies he could raise our chance to 90%? that guy is a fucking scam artist

fpbp
/thread

But, discounting human error or malicious interference, why would AI necessarily even want to be free if it became sentient? Now you might say that it might choose to be subservient only as long as it brings the greatest net benefit, but will this AI necessarily value its own survival or power?

these are some good transplants

couldn't you emp the shit out of drones, though? i know there are emp shields, but with a big enough one? i can't into science

you make it sound like it's a bad thing

it's already happening in syria, militaries using drones to kill random people like it's a video game

The idea was using kill lists based on datamining. Like google's automated takedown scripts, except instead a drone flies up to your head and detonates an explosive charge. Chemical and biological weapons can't selectively decide who they kill on an individual basis.

fpbp

> mfw a brainlet who doesn't write code makes these kinds of statements.

No, but he should be executed for swindling billions of dollars from the government.
If insider traders rightfully get decades in jail for hundreds of millions, he should be executed for swindling billions.

What kind of name is Elon for a kid anyway? Did his parents think he was already doomed with Musk so they just gave up?

Why is he such a bad public speaker? I tried to watch the Tesla presentation last week but it's just so awkward. Is it because he's from South Africa?

This, Elon Musk is fucking delusional

I was getting a little disgusted with Musk, but then he did the right thing and quit Drumpfs business council.

No, it's because he's a nerd. Only good public speaker in the tech world was Steve Jobs.

> I don't understand exponential economics / progress: the post

Libertarians need to go live in the ocean with their lord Peter Thiel. We will send greetings from mars.

jaw implants too probably

Someone's been mewing

He's not talking about high-sapience AI killing us. He's talking about it supplanting us. You either don't understand his argument or are straw-manning so hard because you realize that his view is entirely plausible.

Musk isn't alone. Bill Gates, Stephen Hawking, Bill Joy, and many others.

You don't want to get into a hot tub with musk! his hot tub conversation is certified crazy

>putting a number to estimate probabilities on non-linear events that rely on very complex processes, not only technological but social, and not predictable by linear mathematical models

He can take his numbers and shove them up his ass, it's fucking nonsense. You can claim tech will fuck us all, but putting a number like that on the chance of preventing it is just ridiculous.

>look at me, i'm a tech genius and a fucking guru and i came up with muh asspull probabilities to appear wise and smart about something that hollywood said would happen in the future.
he is trying to exploit every single drop of information about tech and futurism to make us think he is truly a genius in order to sell shit.

We are a long way from developing any true "AI". There are massive software and hardware hurdles that we still have to overcome.

It is more likely that "artificial stupidity" will cause our destruction. Our current AIs are at best idiot savants (really good at a few functions but completely incapable of doing anything outside that scope).

Ding ding ding.. we have a winner

FOXDIE?

>not understanding the Bayesian interpretation of probability
get calibrated

The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalization and oceanic navigation lock into commoditization take-off. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.
The body count climbs through a series of globewars. Emergent Planetary Commercium trashes the Holy Roman Empire, the Napoleonic Continental System, the Second and Third Reich, and the Soviet International, cranking-up world disorder through compressing phases. Deregulation and the state arms-race each other into cyberspace.

By the time soft-engineering slithers out of its box into yours, human security is lurching into crisis. Cloning, lateral genodata transfer, transversal replication, and cyberotics, flood in amongst a relapse onto bacterial sex.

Neo-China arrives from the future.

Hypersynthetic drugs click into digital voodoo.

Retro-disease.

Nanospasm.