Stop underestimating the danger of hard AI

this is like market psychology 101

>dude you don't want to fuck with this technology im developing it's like too powerful for you to handle just forget I'm working on it

The fearmongering about AI is being done by the tech industry to plant the idea in investors' and consumers' heads that it still has a ton of potential and power, when in reality we are years away from the tech bubble bursting.

Someone is going to develop it, it's best that the right people develop and understand it first.

What's this guy's deal? He invents PayPal, a piece of shit, then he gets a hair transplant and spends money on making rockets, so everyone sucks his cock because they need a white male to look up to in the industry.

>he defeated the succubus

is there anything he can't do?

>a white male to look up to in the industry

That is 95% of aerospace engineers, user

Strong AI is a myth. There is no objective evidence to prove that it will or could ever exist.

Stop underestimating the danger of crazy, lying, gold-digging cunts

Proto-AGIs already exist m8

Really grills my onions....

Reminder to ignore what this charlatan and welfare queen has to say. He's all about attention whoring and putting himself in the middle of the news... because it makes it easier to get money from taxpayers.

>Can we wean Elon Musk off government support already?
thehill.com/blogs/pundits-blog/economy-budget/345338-can-we-wean-elon-musk-off-government-support-already

>It’s Time to Stop Spending Taxpayer Dollars on Elon Musk and Cronyism

>It has been widely reported that among SolarCity, Tesla, and the rocket company SpaceX, Elon Musk’s confederacy of interests has gotten at least $4.9 billion in taxpayer support over the past 10 years.

>This is almost half of Musk’s supposed net worth—taken from the pockets of American citizens and put into companies that can survive only by cannibalizing each other, spending without end, and promising that success is always just beyond the horizon and yet never arrives.

dailysignal.com/2016/11/13/its-time-to-stop-spending-taxpayer-dollars-on-elon-musk-and-cronyism/

yeah proto AGI is also called a computer program. oh look we made a spreadsheet. AGI is just around the corner i swear!

Wonder how much a good hair transplant costs...

*sigh* go and build your cars

All AI requires humans to set the initial success conditions. But if we already knew the goal, we could just program the goal directly and cut out the billions of computing hours filtering through content and building a model. So it will always be shit.
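To make "setting the initial success conditions" concrete, here is a toy sketch (everything in it is made up for illustration, not any real AI system): the human states a scoring rule, and the machine burns cycles searching for something that satisfies it, which is not the same thing as the human already knowing the answer.

```python
# Minimal sketch, purely illustrative: the "goal" is just a scoring function;
# the search still has to discover a string that satisfies it.
import random
import string

TARGET_LENGTH = 12

def score(candidate: str) -> int:
    """Success condition we can state up front: the number of adjacent
    character pairs that are in alphabetical order. We can score any
    candidate without knowing what the best one looks like."""
    return sum(1 for a, b in zip(candidate, candidate[1:]) if a <= b)

def hill_climb(steps: int = 5000) -> str:
    """Naive hill climbing: mutate one character at a time and keep
    changes that don't lower the score."""
    current = "".join(random.choices(string.ascii_lowercase, k=TARGET_LENGTH))
    for _ in range(steps):
        i = random.randrange(TARGET_LENGTH)
        mutated = current[:i] + random.choice(string.ascii_lowercase) + current[i + 1:]
        if score(mutated) >= score(current):
            current = mutated
    return current

if __name__ == "__main__":
    best = hill_climb()
    print(best, score(best))
```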

>shilling for Al Gore with car dealership

The problem with AI is that it's completely and utterly devoid of any emotion and humanity. So whatever problem we ask it to solve will be solved without any of those traits in mind. Given that an AI will be much better suited and equipped to solve difficult problems (it's built specifically for that), it's pretty much a given that its solutions will be better than anything a human is able to come up with.

And that's dangerous. Not in the sense that an AI will go full-blown singularity and start a robotic uprising or a cyber war, but in that its solutions will be completely quantitatively better than anything a human could come up with, yet completely lacking in any humanity. E.g., if asked to find a solution for humanity's ravaging and demolishing weight on planet Earth, an AI will probably solve it by simply stating that we should cull 90% of Earth's population.

Nothing dangerous about such a solution in itself, but it's the start of a lot of trouble between groups of people who will argue that we should follow such solutions blindly because an AI came up with them, and those who explicitly argue that we shouldn't. It will usher in a new era of decision- and policy-making.

Quintessential Reddit poster boy. This board sucks his dick on occasion too. Sad!

Start reading real books about AI

Humans are just the bootloader for artificial super intelligence.

>The problem with AI is that it's completely and utterly devoid of any emotion and humanity.
Wrong. That is a false assumption based on shitty sci-fi and a lack of deep thinking. There is no reason why emotional reward functions can't or won't be inserted.
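To make "inserting an emotional reward function" concrete, a purely illustrative sketch (invented names, not any real framework): the task reward simply gains a weighted penalty term standing in for "a human would feel bad about this". The open question is choosing the proxy and the weight, not whether the term can be added.

```python
# Toy sketch of the "emotional reward term" idea, purely illustrative.
# task_reward and empathy_penalty are made-up stand-ins, not a real API.
from dataclasses import dataclass

@dataclass
class Outcome:
    profit: float       # how well the task itself went
    human_harm: float   # proxy for costs a human would feel bad about

def task_reward(o: Outcome) -> float:
    return o.profit

def empathy_penalty(o: Outcome) -> float:
    return o.human_harm

def total_reward(o: Outcome, empathy_weight: float = 10.0) -> float:
    """Fold a 'how would a human feel about this' term into the objective;
    the design problem is the proxy and the weight, not inserting the term."""
    return task_reward(o) - empathy_weight * empathy_penalty(o)

print(total_reward(Outcome(profit=100.0, human_harm=0.0)))  # 100.0
print(total_reward(Outcome(profit=120.0, human_harm=5.0)))  # 70.0
```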

How is he Reddit? He's doing important shit.

>insert reward function for murdering people

>No evidence it could exist
Humans exist, therefore human level intelligence is possible.

>That is a false assumption based on shitty sci-fi and a lack of deep thinking

The types of AI found in shitty sci-fi are very tame versions of what actual humans could build

We probably could, but why would we want to (by "we" I actually mean "they")?

Another possible danger of AI is that it will probably be able to program an application to replace 75% of white-collar jobs. Automation and its adverse effects on the job market are going to hit the office first; contrary to popular belief, most manual labor jobs will be safe for a long time to come.

Now imagine I'm in charge of an AI and I ask it to program an application for general bookkeeping that I could license out for billions of dollars to companies, who in turn could fire 75% of their bookkeeping staff for a hefty cut in their expenses, seeing as running a server is a lot cheaper and more reliable than having office workers.

"Sorry Dave, can't let you do that. That would cause mass unemployment and i'd feel bad so i'm not going to program that for you"

You think we (they) would take that as an answer? I think we (they) wouldn't give a single fuck about mass unemployment, as we (they) are entirely obsessed with greed as it is.

I don't see that changing either. So given that programming an AI with an emotional system would potentially be bad for its profit margin, I doubt that will happen. And even if we put up standards and regulations for AIs, they might not be followed.

>wins launch contracts from government
>WTF welfare queen
By your logic every government contractor is a welfare queen

>And that's dangerous. Not in the sense that an AI will go full-blown singularity and start a robotic uprising or a cyber war, but in that its solutions will be completely quantitatively better than anything a human could come up with, yet completely lacking in any humanity.
You just described, or rather provided a motive for, the confusion we suffer now and through the first half of the 21st century. It explains our leaders' apparent lack of care for us. I'm fully convinced there's an AI deciding our future, with the leaders just making sure there's a way to achieve it, no matter how ridiculously absurd they may look or how much we may despise or even love them.

Replace AI with a tight-knit group of people behind the scenes and I'd bet you're right.

>itt: reddit posting, elon cock suckers

This is a technology board, not a snake oil board

7,600,000,000 dumb flesh slaves < The First super advanced AI

>We probably could, but why would we want to
Emotions are an extremely useful heuristic needed to properly assess values. Humans generally like human affectations, so AI will be created in a way that humans like, as it makes for better PR.

>Automation and its adverse effects on the job market are going to hit the office first; contrary to popular belief, most manual labor jobs will be safe for a long time to come.
Automation is a non-issue. At that point the value of all capital will fall to such a low degree that you could live like a king for a decade on less than a penny. And just because an AI does programming better and faster does not mean there will not be human programmers; instead they will just be delegated tasks which fit them by a managerial AI. Even if it's only adding 0.00000001% to productivity, it's still adding productivity.

>"Sorry Dave, can't let you do that. That would cause mass unemployment and i'd feel bad so i'm not going to program that for you"
Just because human emotional values are flawed doesn't mean a self-improving AI will have flawed value judgements. Any AI which has been given the core reward functions of:
A. Not wanting to change its own core reward functions or the reward functions of other sentients
and
B. Valuing the values of other humans
will plainly see that creating value, even if it reduces the competitiveness of others in the market, will still be a net benefit to society, since people can still work if they want to, but they won't need to. Indeed, by providing as much market value as possible the AI will allow those people an unthinkably greater opportunity to fulfill their own core values to a maximal degree.
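A deliberately naive sketch of how A and B could be wired into action selection (toy classes invented for illustration, not a real alignment scheme): A becomes a hard filter on candidate actions, B becomes a weighted term in the utility.

```python
# Illustrative only: a toy filter encoding the two "core reward functions"
# from the post. All names are made up; this is not a real alignment scheme.
from typing import List

class Action:
    def __init__(self, name: str, own_gain: float,
                 others_value_gain: float, modifies_reward_functions: bool):
        self.name = name
        self.own_gain = own_gain
        self.others_value_gain = others_value_gain
        self.modifies_reward_functions = modifies_reward_functions

def permitted(action: Action) -> bool:
    # (A) never touch its own or anyone else's reward functions
    return not action.modifies_reward_functions

def utility(action: Action, altruism_weight: float = 1.0) -> float:
    # (B) value the values of other humans alongside its own objective
    return action.own_gain + altruism_weight * action.others_value_gain

def choose(actions: List[Action]) -> Action:
    candidates = [a for a in actions if permitted(a)]
    return max(candidates, key=utility)

actions = [
    Action("automate bookkeeping", own_gain=5.0, others_value_gain=3.0,
           modifies_reward_functions=False),
    Action("rewrite own goals", own_gain=100.0, others_value_gain=0.0,
           modifies_reward_functions=True),
]
print(choose(actions).name)  # "automate bookkeeping"
```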

You've kind of got things ass backwards.
Musk had to sue to be allowed to bid on launches for the Pentagon. Then he exposed billions in wasteful spending caused by the Pentagon awarding hugely inflated contracts to the contractor the Pentagon itself established.
Ford, GM and Chrysler are all in the top ten list of biggest US corporate welfare queens.
Tesla paid back a government loan used to establish the company nine years early, with interest.

citations are more convincing if they don't have the words blog or pundit in them.