Do you agree with him?

Other urls found in this thread:

youtube.com/watch?v=-2ml6sjk_8c
youtube.com/watch?v=s8AHzY7xr10
wired.com/story/tesla-ntsb-autopilot-crash-death/
businessinsider.com/tesla-autopilot-engineers-clashed-over-self-driving-car-plans-wsj-2017-8
wsj.com/articles/teslas-push-to-build-a-self-driving-car-sparks-dissent-among-its-engineers-1503593742
bgr.com/2017/08/24/tesla-autopilot-elon-musk-engineers-safety/
arstechnica.com/cars/2017/08/report-tesla-is-bleeding-talent-from-its-autopilot-division/
electrek.co/2015/12/16/elon-musk-offered-a-multimillion-dollar-bonus-for-geohot-to-build-a-mobileye-crushing-autopilot-system-for-tesla/
youtube.com/watch?v=sk1NkWl_W2Y
youtube.com/watch?v=Jj952Ehy59c
youtube.com/watch?v=A4BR4Iqfy7w
businessinsider.com/tesla-factory-workers-detail-grueling-conditions-fremont-2017-5
globalresearch.ca/american-mercenaries-torturing-saudi-elites-rounded-up-by-crown-prince-blackwater-is-allegedly-involved/5619654
mercurynews.com/2017/11/15/is-billionaire-vc-peter-thiel-trying-to-break-up-google/
axios.com/peter-thiel-sells-most-of-his-remaining-facebook-shares-2512730138.html
thehill.com/policy/technology/360974-peter-thiel-is-out-at-influential-silicon-valley-startup-accelerator-y
techcrunch.com/2017/09/20/peter-thiel-piab-trump/
zerohedge.com/news/2017-11-26/sweet-dreams-elon-musks-periodic-reminder-coming-ai-apocalypse
youtube.com/watch?v=yoYZf-lBF_U

What's this snake oil peddler's grudge against AI research?

Is it some sort of scheme for media attention?

He's right. Look at how many times Google's algorithms have done the unexpected. You can't have irresponsible people at the helm of this technology.

Even Stephen Hawking agrees that AI is potentially dangerous.

But we're really not at the point where we need to regulate it, YET. When AI starts making decisions that can actually choose between life and death for human beings, that's when we need to regulate it.

Like if the AI in my car has to decide, "Do I swerve off this cliff to avoid that bus and save a bunch of children, or do I prioritize my driver and hug the wall and hope for the best?" then do you really want your AI to solve the problem logically? Because solving it logically means you're going to die. Computers only know numbers, not complex ethical questions. IMO, an AI owned by me should prioritize ME over everything. If someone comes at me trying to kill me and I have a robot bodyguard, I don't want the laws of robotics to stop the robot from saving my life; I'd rather it kill my attacker.
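
For what it's worth, the "prioritize ME" policy being described here is just a weighted cost over candidate maneuvers. Here is a rough sketch; the maneuvers and risk numbers are made up purely for illustration.

# Toy sketch of the "owner comes first" rule described above.
# The maneuvers and risk estimates are invented; a real planner
# would derive these from perception and physics models.

candidate_maneuvers = [
    # (name, estimated risk to the owner, estimated risk to others)
    ("swerve_off_cliff", 0.95, 0.05),
    ("hug_the_wall",     0.30, 0.60),
    ("brake_hard",       0.40, 0.40),
]

def choose_maneuver(maneuvers, owner_weight=1.0, others_weight=0.0):
    """Pick the maneuver with the lowest weighted risk.

    With others_weight=0.0 this is the pure 'prioritize ME' policy
    from the post; raising others_weight trades the owner's safety
    against everyone else's.
    """
    def cost(m):
        _, owner_risk, others_risk = m
        return owner_weight * owner_risk + others_weight * others_risk
    return min(maneuvers, key=cost)[0]

print(choose_maneuver(candidate_maneuvers))            # -> 'hug_the_wall'
print(choose_maneuver(candidate_maneuvers, 1.0, 1.0))  # -> 'brake_hard'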

If you think about it, the companies that are going to have the most advanced AI will be Google, Microsoft and Facebook, since they have such large sets of data to work with. Any of these companies could have a huge breakthrough and then it's game over. Who knows how they could use it.

>Even Stephen Hawking agrees that AI is potentially dangerous.
>even this guy with no expertise in the subject feels he should share his opinion

Yes, let's let the government regulate everything as much as possible. What could possibly go wrong?

How would you regulate the unknown?

There's going to be a lot of disappointed STEMfags in 20 years when robots have displaced all blue collar workers and we still don't have hard AI.

Industrial robots are heavily regulated by OSHA already. Does this fag even know how to run the factories he owns?

If regulation would prevent Google's AI from labeling photos of black people as photos of gorillas again, then I want as few regulations as possible. That shit was funny.

No. Hence why he's fighting the UAW even though they would bring in top tier factory workers.

Google image search is still a great source of Sup Forums memes

Toyota and Honda US plants are nonunion and they don't seem to have a problem.

I don't care as long as I get a cute AI waifu.

FAA regulates a limited resource (air space).

FCC regulates a limited resource (broadcasting wavelengths).

AI and robotics aren't limited resources. AI doomsday faggots are retarded anyway: 1) it'd be a global threat and the US can't regulate the rest of the planet, 2) good luck regulating something that anyone with a PC can tinker with, and 3) government-developed AI would likely be the most dangerous and most likely to accidentally cause harm.

source?

It won't be an AI if it's attracted to you.

Sauce boss

DELETE THIS

Bukkakeshitty Acterribles

I'm gonna laugh pretty fucking hard when AI devs start using blockchain tech and this lobbyist and all of you bootlickers are crying over not handing AI tech to the state.
Fuck your regulations.

I think people forget that Elon Musk and Hawking and black science guy are still just fucking people with a narrow realm of expertise. Elon made a few very successful businesses, that's great. Hawking has theoretical physics, and black science guy is black. That doesn't mean they know jack fucking shit about AI and AI development.

Also this nonsense about the FAA

Nobody wants to do the job of the FAA. Nobody wants that immense burden placed upon them. Airlines are really happy none of them have to do any of that garbage.

Nope; how do you regulate something that doesn't exist (AI)?

It's just a washed-up tech spewing nonsense again.

>wdn't

It's going to be funny when "machine learning" doesn't amount to anything, just like in the previous decades. If actual talented programmers in the 80s and 90s with massive budgets couldn't live up to expectations, a bunch of fucktards from meme companies like Google and Facebook aren't going to accomplish shit.

"machine learning" isn't new. I remember I used to play with ANNs back in 2005. It's just a fad that IQlets have jumped on board because it sounds intelligent.

I'd literally throw a resume in the garbage if I see the words "machine learning" or "neural networks" anywhere on it.

Max characters you fucking mong

>whomst'd've

What's funny is these retards seem to think that all this AI stuff is brand new and that the AI industry hasn't already crashed multiple times. It's why they had to push the "machine learning" crap in the first place.

>statistics are inherently evil
I knew it!

2017 m8

When he first came out saying this I didn't agree, but after thinking about it more, I do. I don't think the big risk is in an AI system that is so smart it kills us all terminator style, but in giving an AI system that isn't smart enough the power to make decisions where people's lives are at stake. We are a lot closer to the reality of an AI powered car running over someone because it wasn't trained to handle some corner case, than the reality of an AI powered car that randomly decides to kill someone, and we need regulations to make sure it doesn't happen.

We need to regulate it way before then. What if someone makes nanomachines and tells them to make infinite condoms? There's no decision involved on the machine's part. It's just turning people into condoms like it was told.

>but in giving an AI system that isn't smart enough the power to make decisions where people's lives are at stake
This is the exact risk of AI right there. The regulation shouldn't be what code you're allowed to put in an AI program, it should be over what you allow an AI to do.

Humans are not fit to govern themselves and we know it. That is why we created regulations and governments to formalise decision making.

Something being "a limited resource" is irrelevant to whether it requires safety regulations, dumbass.

You can tell an AI not to do something like kill people, but it will still do it anyway if it's dumb enough to not know when a certain action could kill people indirectly.
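
A toy illustration of that point: a filter that only blocks actions labeled as harmful happily allows actions whose side effects are. The action model below is invented purely for the example.

# A rule that only filters actions whose *label* is harmful
# misses actions whose *consequences* are.

actions = {
    # action: (directly_harmful, side_effects)
    "shut_down_hospital_power": (False, ["patients lose life support"]),
    "divert_water_supply":      (False, ["crops fail downstream"]),
    "fire_weapon":              (True,  []),
    "do_nothing":               (False, []),
}

def naive_filter(actions):
    """Block only actions flagged as directly harmful."""
    return [a for a, (harmful, _) in actions.items() if not harmful]

def consequence_aware_filter(actions):
    """Also block actions whose listed side effects harm people."""
    return [a for a, (harmful, fx) in actions.items() if not harmful and not fx]

print(naive_filter(actions))             # still allows the hospital/water actions
print(consequence_aware_filter(actions)) # only 'do_nothing' survives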

This. All those cunts are flying off the handle about AI, but current AI is actually in no way able to make proper decisions because of how much it's rooted in probabilistic methods and learning off of randomness. It can only really recognise patterns and optimise matrices, and so will be relegated to the task of big data processing for decades to come.
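
For anyone wondering what "optimise matrices" amounts to in practice, here is a toy sketch: fit a weight matrix by gradient descent on squared error. The data is synthetic and purely illustrative.

# Fit a weight matrix W so that X @ W approximates Y,
# by plain gradient descent on the mean squared error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
W_true = np.array([[2.0], [-1.0], [0.5]])
Y = X @ W_true + 0.01 * rng.normal(size=(100, 1))

W = np.zeros((3, 1))                     # start from nothing
lr = 0.01
for _ in range(2000):
    grad = 2 * X.T @ (X @ W - Y) / len(X)   # gradient of the mean squared error
    W -= lr * grad

print(W.round(2).ravel())                # ~ [ 2.  -1.   0.5]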

>Bukkakeshitty Acterribles
No seriously I wanna watch

>AI and robotics are the same thing
They are not.

Why wait until there's a potential for danger before having oversight of some kind in place?

they just want someone to enforce the 3 laws, basically Snatcher

>regulation

Whenever you hear someone cry for regulation, what they're really saying is

>please make something only I will agree with

Regulation is never about the average consumer.

>trust me goys, the faa is there to protect you
>only my company will be able to pass those regulations
>it's for your own good goys
fuck that corporate welfare leeching faggot

>get rid of Tesla subsidies. Private profits require private funding

Both. Let me give you the breakdown on Elon.
He was one of the initial funders of DeepMind. He met w/ Sergey and pitched them for acquisition. He is an investor in Vicarious.ai (they don't share details about their work and are secretive). Elon doesn't talk much about this company as they're working on advanced AI and he's a businessman. Then there's OpenAI (Elon's attempt to corner the market and solidify media attention). OpenAI is structured to convince dummies to share all of their valuable IP, which Elon then intends to use cost-free in his various companies/products. Worst case, he aims to position them as a regulatory group if enough dummies buy his fear propaganda. He's very much a con artist who talks out of both sides of his mouth. Don't listen to a word he has to say; look at his actions and you'll see they're all aligned toward easy money and manipulation of the public.

He's a fraud, manipulator, and liar who reaches brainlets predominantly

Stephen Hawking is a chair-bound theoretical physicist who knows fuck all about society, computer science, and AI. He knows about the subject matter he is specialized in. Not a single person who doesn't do AI R&D w/ an extensive history in computer science as an actual engineer has an opinion worthy of consideration.

You get it. Brainlets are the only ones gawking over irrelevant idols' opinions.

Tech already exists. It's distributed cloud AI and AI as a service. AI as a platform.

Another user who gets it.

Elon's shit tier self driving car tech has already killed a number of drivers and run into concrete barriers. Unironically, in the very conference where he cried end-of-the-world and demanded regulation of future AI, he stated that regulation is a disease that destroys businesses, hinders innovation, and should be removed as much as possible (ofc as it relates to his companies). Guy's a fucking fraud to the worst degree.

They're trying to protect their heavily invested weak AI technology, products, and business models against something that will out-innovate them. This is classic sleazeball business tactics:
> Once invested heavily in a product/business model, spread fear, uncertainty, and doubt against one that could disrupt your business
> Try to block entry or competition from small/medium sized players with regulatory rules and restrictions.

It's clear exactly what Elon is doing to any business minded person. He and others like him are being watched as closely as these continuous threads are being mined.

brainlet who's been brainwashed.

This guy gets it. Elon's protecting his investments through manipulating the public and possibly govt. to create barriers to entry for potential competitors.

He's got billions of dollars (on paper), multiple companies on the verge of bankruptcy, and he's still playing sleazeball 101 tactics to try to drum up business. Maybe if he spent less time LARPing as Tony Stark, trying to be the CEO of multiple companies, and manipulating the public, maybe he could actually create and maintain a profitable, innovative business.

Such a fucking tool.

>Elon's shit tier self driving car tech has already killed a number of drivers

I don't even disagree that he's a faggot, but can I get a source on this? The only news I have ever heard with Tesla self driving cars are all minor accidents with no serious injuries to anyone.

He is spewing pure sci-fi hype; he gives absolutely zero facts to back up his claims. We have made very minor progress in AI since it started about 50 years ago, mainly using machine learning to do certain kinds of predictions. This is not AI. A self-driving car is not AI but robotics technology only slightly improved from what we had in the 60s and 70s. Basically, Elon doesn't know anything about AI. I'd like to see someone give Elon an AI coding test and watch him fail hard.

> Video of one running into a concrete barrier:
youtube.com/watch?v=-2ml6sjk_8c

> Death
youtube.com/watch?v=s8AHzY7xr10
NTSB commentary:
wired.com/story/tesla-ntsb-autopilot-crash-death/

Engineers' comments:
> businessinsider.com/tesla-autopilot-engineers-clashed-over-self-driving-car-plans-wsj-2017-8
> wsj.com/articles/teslas-push-to-build-a-self-driving-car-sparks-dissent-among-its-engineers-1503593742
> bgr.com/2017/08/24/tesla-autopilot-elon-musk-engineers-safety/
> arstechnica.com/cars/2017/08/report-tesla-is-bleeding-talent-from-its-autopilot-division/

Fag Elon even tried to hire Geohot to fix this engineering problem:
> electrek.co/2015/12/16/elon-musk-offered-a-multimillion-dollar-bonus-for-geohot-to-build-a-mobileye-crushing-autopilot-system-for-tesla/

Geohot pulled out because Elon called him on his birthday attempting to modify the contract to entitle Tesla to Geohot's IP and work w/o pay under faggot tier clauses.

Guy's a fucking scumbag. He put unsafe cars on the road against his head engineer's direction. He marketed the car as something it's not. He fucked over the engineers he sought to bring on to fix Mobileye's flaws, and he's running around claiming AI is dangerous. AI isn't dangerous. Fags like him who only care about profit and are sleazebag liars are the danger. Always have been. Technology is a tool. What's dangerous is who wields it.

Anyone w/ a brain knows or is catching on that Elon's a charlatan. So it's no biggie at this point. You could have found sauce yourself if you googled, btw.

well damn, I hope the feds take his advice and regulate tesla out of business

Oh, and Neuralink is yet another bullshit company. Brain-computer interfaces already exist and have been in operational trials for some time. Here's great technology from 2015 for prosthetics:
youtube.com/watch?v=sk1NkWl_W2Y

DARPA, over a year ago:
youtube.com/watch?v=Jj952Ehy59c
Research goes back decades:
> youtube.com/watch?v=A4BR4Iqfy7w

Already doing functional testing. The big issue with a physical implant is that you expose the brain to diseases and bacteria by opening it up. The brain does not have a strong immune system, and attaching things to it causes irreversible neural tissue scarring and retrograde degeneration, which is why it's only used in patients with extremely serious health conditions.

But w/ Elon you get some TED-talk-like pitch as if he created the fucking technology.
This guy is such an insufferable hack and a cancer for all of the real engineers/scientists who create and usher in innovative technology. He by and large only appeals to dumb-asses like himself.

Regulation is coming for the big boys.
Deregulation is coming for medium/small businesses (the real innovators and job creators).
These fucboi tech titans jumped the shark and put this country in jeopardy so you're soon going to see the biggest business in America (govt) put their asses in their place. What you're seeing towards the close of this year is the last big hurrah from the talking heads and idols of old before they get shut the fuck up.

This asshat doesn't even treat his workers properly: businessinsider.com/tesla-factory-workers-detail-grueling-conditions-fremont-2017-5

And he's talking about fucking humanity and the future of this planet. These rich fucbois are a trip and a half.

You reversed the order. Big boys are going to get less regulation while the small/medium guys are going to get more.
Big boys have money to buy politicians with, and people work for who pays them.

I agree with him 100%

Let's start with AI driving :^)

This is what occurred in the previous cycle in order to force labor up the skill ladder. That's complete now and is having significant diminishing returns. The middle class is blown out and there are trillion dollar deficits related to what big corps did in excess of the intended goal. Thus, what happens as it already has historically is antitrust lawsuits, merger/acquisition blockades, and breakups. Regulation goes heavy handed on big multi-national corps and eases on small/medium sized businesses in a bid to increase domestic production and decrease the trade deficit. Govt's done it before and you can already see them posturing to do it again... thus all the anti-google, amazon, etc talk and the latest string of acquisition blockades.

People often forget that the biggest corporation is govt. and they always get their cut. Big Corps went full retard w/ the offshoring and tax evasion, and now govt. and regulators are positioning to put a boot up their ass.

They're already getting in practice runs overseas:
globalresearch.ca/american-mercenaries-torturing-saudi-elites-rounded-up-by-crown-prince-blackwater-is-allegedly-involved/5619654

Death & taxes. Newcomer fucbois who try to fuck the piper forget what happens when you skip the donation plate.


> mercurynews.com/2017/11/15/is-billionaire-vc-peter-thiel-trying-to-break-up-google/
> axios.com/peter-thiel-sells-most-of-his-remaining-facebook-shares-2512730138.html
> thehill.com/policy/technology/360974-peter-thiel-is-out-at-influential-silicon-valley-startup-accelerator-y
> techcrunch.com/2017/09/20/peter-thiel-piab-trump/

Silicon Valley is getting ready to roast.
They've already taken care of Hollywood.

No because he's retarded.

Industries where lives are at risk already have tough regulations in place, regardless of whether the company is using AI or not. See just one example.

There's no need to do blanket regulations on something that is already heavily regulated in fields where it's needed.

> No because he's retarded.
Correct

> Does this fag even know how to run the factories he owns?
I think he skipped the engineer tour while he was doing lines of coke + ambien w/ a splash of wine

All robotic arms typically have a work cell that they cannot physically exit w/ any combination of 3D input. There are also all sorts of halt functions, emergency stop buttons, double/triple-check sensors, etc. (rough sketch below).

I always know an inexperienced or shit tier engineer when they go rambling on about AI safety, as they clearly haven't been exposed to the insane levels of engineering safety practices already in place and standardized across tons of industries.
> mfw it would take elon 10 years to understand safety code for a fucking elevator
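
Rough sketch of that kind of interlock logic, just to illustrate the idea; the limits here are made up, and real cells implement this in certified safety PLCs and hardware, not in application code.

# Software work-envelope check plus an emergency-stop latch.

WORK_CELL = {"x": (-0.5, 0.5), "y": (-0.5, 0.5), "z": (0.0, 1.0)}  # metres

class SafetyInterlock:
    def __init__(self):
        self.estop_latched = False

    def press_estop(self):
        self.estop_latched = True   # latches until a deliberate manual reset

    def reset(self):
        self.estop_latched = False

    def command_allowed(self, target):
        """Reject any move outside the cell, or anything while e-stopped."""
        if self.estop_latched:
            return False
        return all(lo <= target[axis] <= hi
                   for axis, (lo, hi) in WORK_CELL.items())

interlock = SafetyInterlock()
print(interlock.command_allowed({"x": 0.2, "y": 0.1, "z": 0.4}))  # True
print(interlock.command_allowed({"x": 2.0, "y": 0.1, "z": 0.4}))  # False: outside cell
interlock.press_estop()
print(interlock.command_allowed({"x": 0.2, "y": 0.1, "z": 0.4}))  # False: e-stop latched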

>got an AI firm
>want the market to be more regulated
no. I'm not surprised though

says who

Yea, sooner or later "rogue" AI is going to be discovered and nobody will take any responsibility.

Well, let's see here.
The military would probably love an autonomous drone that can think for itself.
With what we have, we can probably have a drone like this find a gun and base its action off of where it found the person with the gun.

I see a big problem in this the same way I see a problem with drones doing a bombing: we should not detach a human from a murder even if it's war. The further removed the human element is, the worse shit is going to get.

This is my big fear from AI research: the higher up the command chain you are in most organizations, the more likely you are to be, or to be among, psychopaths, and I want those psychopaths to have as many non-psychopathic people as possible to go through to get shit done, not an equally psychopathic AI that they tell what to do.

The biggest danger from AI is the majority of workforce being made redundant.

With proper regulation and taxes it could lead to a comfy universal basic income future, but what we're going to get instead is a huge part of the population having to aggressively compete for increasingly shittier and increasingly less necessary jobs. Probably along the American working-poor model of having several part-time jobs with no benefits adding up to 60-70 hour work weeks.

There of course always WILL be jobs, because even if you don't need human workers you need to occupy as much of their time as possible and provide basic sustenance in order to prevent an uprising.

zerohedge.com/news/2017-11-26/sweet-dreams-elon-musks-periodic-reminder-coming-ai-apocalypse

This fag is trying to meme now... He's on a fear propaganda binge.. hahaha.
Ambien and Merlot got dis boy tossin in bed.

He used a Nate Dogg music video in relation to his regulation pleas. He's really jumped the shark.
Dis fuck boi shook
youtube.com/watch?v=yoYZf-lBF_U

He's really pulling out all the stops towards the close of the year.
> mfw something spicy is coming

> It will happen when the weather cools
> The plans laid long ago.

There's a big problem here, yeah, but the implementation has far bigger problems than just ethics. A drone with a gun seems close, but it's actually rather far away if you want it to fly by itself and recognise targets by itself.

Humans are irresponsible. They create, and only afterwards manage to think about the possible consequences and necessary regulations.

don't regulate AI, just let it do whatever and let humans go extinct

"Regulate" is just lingo for seizing power, which is why Democrats are so desperate to grab onto NN.

Also, Elon's a joke. Stop giving him relevance.

That webm is from the first or second episode, it started out kind of cool but was total garbage overall

No, seriously, what's it called though?

Wow. You know you've worded this so you'd rather kill a bus full of children than bite the bullet for your shitty driving.

well, he's not wrong about this. but then again, i think the govt shouldnt give oil companies or electronic car jews or their purchasers any subsidies.

regulate the joo cars, now!

>itt: jobless, jealous idiots bash successful entrepreneur and think they know better

A car would never make such decisions, because they would often require human-level deduction skills ("who's at fault here?"), and because it would place the responsibility on the car manufacturer. People like you who assume self-driving cars will come with built-in solutions to moral dilemmas that are still debated today are just dumb.

Of course I would. Fuck the children. I don't owe them anything. If a busload of children has to die so that I can continue to live, so be it.

I don't care about somebody else's children, cuck. Do you also provide for your wife's son with Tyrone?

Regulation is the problem. What happened to the free market? It should be allowed to develop competition and then approach public safety without costly and hindering regulations.

>I can't think for myself so I believe a successful entrepreneur is always correct.
You who shirk responsibility are a very serious issue for society.

>i take random tweets from faggots as gospel

>he thinks a literal robot wouldn't know about AI

The thought that half those children will amount to more than you'll ever be is the sad fact of the matter. It's always the great and innocent that go first.

...

yo wtf?
get rid of the FAA so we can get cheaper flights!
Competition is good, and the FAA's restrictive bullshit kills it and drives up prices.

>ywn be bullied by your AI gf

It's doubtful, but at any rate I don't give a flying fuck what they're gonna amount to. In any case my car is to choose a scenario where I live, everyone else dies. Fuck the children and let them rot in hell. My car, my rules, if it doesn't do everything in its power to save my life, I don't want to lease it.

I wonder if they'll be able to peg you for manslaughter? You know the shit people will be able to get away with by just blaming the AI.

If a bus driver is about to plow straight into me, it's his fault (or his AIs). But your butthurt is delicious.

Musk doesn't know shit. He's just rich and pays people to do all that crap for him.

No.
We are already regulating robots; if you think otherwise, you are an idiot.
As for regulating AI, it depends on what he means.
We already regulate software; a good example is safety systems.
When you hit the emergency stop, you need to stop the system. Laws about latency make sense here (rough sketch below).
"Regulate AI" means nothing, so I cannot agree or disagree with it.
Elon Musk cannot even be consistent in the way he uses the term.
Sometimes he means a superintelligence (AI as understood by science fiction writers), sometimes he means artificial intelligence (as understood by AI researchers).
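
To make the latency point concrete, here is a rough sketch; the 500 ms bound and the interfaces are invented purely for illustration.

# Measure how long the system takes to actually halt after the e-stop
# is pressed and compare it to a required bound.
import time

MAX_STOP_LATENCY_S = 0.5   # hypothetical regulatory bound

class Machine:
    def __init__(self):
        self.moving = True
    def halt(self):
        time.sleep(0.05)       # stand-in for brake/contactor delay
        self.moving = False

def emergency_stop(machine, max_latency=MAX_STOP_LATENCY_S):
    pressed_at = time.monotonic()
    machine.halt()
    latency = time.monotonic() - pressed_at
    assert not machine.moving, "machine failed to stop"
    if latency > max_latency:
        raise RuntimeError(f"stop latency {latency:.3f}s exceeds {max_latency}s limit")
    return latency

print(f"stopped in {emergency_stop(Machine()):.3f}s")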

A better question is: WHY do we need regulation on AI?
We are more than 50 years away from the beginning of what he is afraid of, so why do we need to regulate it?
Is it just to stop competition?
When is AI a "public risk"?
It seems like he is willfully ignorant about what AI is.

Gib anime!

>wdn't
Our fellow INTJ fears the grammar nazi AI.

I'm fine with AIs as long as I get bullied like in

Why should I disagree with Elon?
He's the modern-day renaissance man visionary genius, not me.

Bukkakeshitty Acterribles

I see this argument being brought up very often.
It is the same premise as the movie "Wishmaster".
A girl asks to be beautiful forever; she gets turned into a doll.
We ask a computer to make as many condoms as possible; it makes condoms out of people.
The problem with this logic is that we are very good at controlling computers, and it is very easy to predict what a computer will do.

If I run a program on my computer, I can expect that the visual output on the screen might change, maybe it does something to the files on the computer.
Maybe everything is uncertain within the range of what files might change and what might happen to the screen.
But I can easily predict that the application won't put food in my oven; that is clearly out of the scope of the application.
Stuff doesn't magically happen just because you don't understand it.

>butthurt
>implying it wouldn't be your ai.

I don't believe we are close to reaching AI, but I do think every effort needs to be made to stop it. I'm even ok with an anarcho-primitivist world if that's what it takes.

>Sup Forums fucks with AI
>it goes full 1488 GTKRWN
>no one stops it

not sure if want

No, I do not.
Idiotic liberal scum.

God I hate this faggot.

>let humans go extinct
If only it were so easy.
Best bet is hacked nanobots controlled by malevolent AIs doing it.