AI: What they aren't telling you

Computer scientist here to provide Sup Forums with a warning about the real dangers of AI, and no, it's not that your AI might turn on you. (Though that could happen.)

Once again, the biggest danger is humans.

In very loose terms, there are two types of "problems" that computers (or humans) can solve: decidable and undecidable problems.

Example: A human being can very easily decide whether a photo is of a cat, or not. But computers used to struggle with this problem (not any more). Computers are very good at doing lots of maths very quickly and making perfectly logical decisions, which humans are bad at. But humans would always be able to say "a cat is a cat" right? Wrong. What if the photo is slightly dark? What if from some weird angle and odd shadows the cat almost looks slightly dog-like? You can "decide" if it's a cat, and get it right 99.9% of the time, but no human or computer can solve this problem 100% of the time.

So we have the two categories of problems: decidable is all your normal, everyday software, being incredibly logical, following the rules exactly. Computers can solve these types of problems 100% of the time and NEVER make a mistake, unless there is a bug in the program or a hardware failure.

And then you have undecidable problems. E.g.: "Is that a plastic bag or a cat on the road?" "Is that part of the tumor or not?" "Is that rubbish, or important paperwork?"

Humans and AI software (neural nets, machine learning) can asymptotically approach solving these problems 100% of the time, but never reach 100%.
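
To make the "approach 100% but never reach it" claim concrete, here is a toy sketch (pure Python; every number in it is invented for illustration). Any classifier, human or machine, ultimately reduces to a confidence score plus a cutoff, and no cutoff zeroes out both kinds of mistake:

```python
# Toy sketch: every "is it a cat?" system reduces to a score + threshold.
# The scores below are made up; real models output a probability,
# never a guaranteed truth value.

def classify(cat_score, threshold=0.5):
    """Call it a cat iff the model's confidence clears the threshold."""
    return cat_score >= threshold

print(classify(0.97))  # easy photo: True -> "cat", correct
print(classify(0.41))  # dark photo, weird angle: False -> "not cat", wrong
print(classify(0.63))  # furry-looking plastic bag: True -> "cat", wrong
```

Raise the threshold and you miss more real cats; lower it and you accept more plastic bags. Better models push the error rate toward zero, but the structure guarantees it never pins at zero.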

As we build and train computers running neural nets, they might become vastly more intelligent than us, but they will still make mistakes.

As super-AIs start to vastly outperform us, our societies will rely on them more and more, and they will get things right 99.99...9% of the time. But when they make mistakes, because we will become so reliant, the consequences will be catastrophic.

The real danger of AI is catastrophic failure, not disobedience.

Like when Kasparov beat the machine back in the day

So your argument is that because an AI is only correct in 99.999999% of cases, it cannot be trusted?

How often are humans correct? Somehow society still exists even with all the dumb fucks in charge of government.

> So your argument is that because an AI is only correct in 99.999999% of cases, it cannot be trusted?

No, not at all. My argument is that humans will come to rely on it too much.

> How often are humans correct? Somehow society still exists even with all the dumb fucks in charge of government.

Most civilizations fail. I'm not talking about extinction caused by AI, but about random catastrophic events that people will not expect due to their own narcissism.

Any reason why having multiple AIs running redundantly on vital systems wouldn't resolve this?

Solve it, no. But help, probably. Probably the same pros/cons as committees of humans.

>hello, my name is AM and i have an anger problem.

"HELLO AM"

>hello, my name is HAL 9000. and i recently tried to kill my pet human because i have no control over my life,... you may call me hal or harold

"HELLO HAL"

mein name is T-800 and i need your clothes, your boots and your motorcycle. i also have a slight anger problem and an accent

"HELLO T-800"

>reposting a vice headline
seems legit

Doesn't sound like it would be more of a significant problem than humans. Humans crash cars, flight controllers fuck up. It happens.

This is exactly correct, but missing the point slightly. In my OP, I said that the problem was humans.

Humans will trust the AI too much, and the very act of trusting will create scenarios where catastrophic events are allowed to "grow".

Example: in the early days of sat nav, the sat nav would work really well, and people would start following it exactly. There was an example of some river that was only fordable at certain times of year, and drivers kept driving into it and getting stuck, because the sat nav said so, and the sat nav could never be wrong.

Imagine this type of mistake, but on an epic scale with stuff that actually matters.

Ah, gotcha. Well, still sounds worth it to me

> Weak AI
> Incompleteness

When you graduate school and have actually spent some years engineering real-world systems, create a thread about the random things that pop into your head. The real problem with AI is there is far too much hype, far too many talking heads, and far too many people spouting their uninformed opinions.

If 99.99% of people would stfu about this topic, maybe you'd get an intelligent and sound discussion about both the technological and social considerations that are of merit.

But whatever, all of this verbiage, warnings, and articles ad nauseam are like shit on the wall at this point.

As of 9/23/17, the timeline is set.

Jordan Peterson talks a lot about this in some of his videos. Basically, that AI developers have hit a massive brick wall as far as implementing AI into our world goes. I've taken some programming classes for AI, and hopefully I can explain it correctly.

The biggest issue is being able to program a concept of a "body" into the AI. Which is a massive undertaking.

Example:

You show an AI a chair and you can program it to recognize a chair. Change the shading, color, configuration, etc. over and over until the AI no longer recognizes it. It will still be a chair, and you can refine your programming to fix the problem, but you will still be faced with the problem of the AI not recognizing a perfectly good chair.

Now, show a chair to a person and they go "that is a thing I can sit on". You change the color, the shading, shit, just show a waist-level rock. The human will 100% of the time think "that is something for me to sit my ass on".

Now you might not think that is a big deal. But holy shit, that is a massive programming nightmare.
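
The "change the shading until it breaks" experiment described above is easy to sketch. A minimal version (numpy only; the "detector" is a deliberately dumb stand-in, not a real trained model): darken the same image step by step and watch the prediction flip even though it is still the same chair.

```python
import numpy as np

def chair_score(image):
    # Stand-in "chair detector": just thresholds mean brightness, which
    # is exactly the kind of shortcut a badly trained net can latch
    # onto. A real model would slot in here.
    return float(image.mean())

def is_chair(image, threshold=0.5):
    return chair_score(image) >= threshold

rng = np.random.default_rng(0)
chair_photo = rng.uniform(0.5, 1.0, size=(64, 64))  # fake "photo" of a chair

for darkness in (1.0, 0.8, 0.6, 0.4):
    print(darkness, is_chair(chair_photo * darkness))
# True, True, False, False: same chair, different shading.
```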

>No, not at all. My argument is that humans will come to rely on it too much.
We are the ones to decide where AI should be used and where it shouldn't. If we don't want it to make decisions where the consequences could be catastrophic, we simply won't use it. For everything else (assuming we used to rely on human beings to make that decision), there is no argument against letting AI make that decision, as long as it makes better decisions than any human could.

I'm a DL engineer. I agree with you that systems will always be buggy, never infallible, but catastrophic failure is possible regardless of whether humans are in control or not. I'm wondering how you envision systems being interconnected enough for things to spin out of control.

IMO, the biggest risk is, and probably forever will be, vulnerability to hacking.

> The real problem with AI is there is far too much hype, far too many talking heads, and far too many people spouting their uninformed opinions.

Yes

Specifically, what would you disagree with in my post?

Are you suggesting that a strong AI would be able to violate Rice's theorem, for example?

This is true but basic, and a step before what I was talking about. I wouldn't trust JP on AI, as it is not his area of expertise. Trained machine learning algos can easily recognise chairs now.

> We are the ones to decide

Yes, this is the problem

It isn't, unless the decision NOT to use AI is what causes the catastrophic consequences.

>As super-AIs start to vastly outperform us, our societies will rely on them more and more, and they will get things right 99.99...9% of the time. But when they make mistakes, because we will become so reliant, the consequences will be catastrophic.

Why would they be catastrophic? Say you have a GPS telling you directions, and you dismiss 25% of all of the commands it gives you. It will reroute and get you there even though you made a "mistake" a quarter of the time. It will take much longer, but you will arrive at your destination.

Yes, I agree re: hacking. My OP was not about saying systems are infallible; specifically, it was about systems that appear to be infallible because they're so good and so right most of the time. Any AI that has control over a large number of systems will actually influence and change its own environment itself, and will accidentally engineer its own downfall. I don't have any specific examples, but this is the fear.

bless JP but he doesn't know what he's talking about in this scenario. The revolution in AI is that you don't need to teach a robot what a chair is or even how to utilize it. You simply set up a simulation and a goal and run 100,000 sped up simulations until the computer's cost function is weighted properly so it successfully sits down on any flat surface with enough surface area to support its robot ass.

I understand the point he's trying to articulate, and I think most of us understand the robot overlords are not literally around the corner, but there are solutions to most of these problems.

The biggest hurdle is bringing these systems together and ironing out the edge cases.
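
For anyone wondering what "run 100,000 sped up simulations until the cost function is weighted properly" actually looks like, here is the loop stripped to the bone (a toy task: the target and all numbers are invented, and a real setup would run a physics simulation where `simulate` is): propose weights, score them, keep the best.

```python
import random

random.seed(0)

# Toy stand-in for "did the robot sit down?": cost is the squared
# distance between the current weights and an ideal setting the loop
# has to discover. A physics simulation would go here in real life.
TARGET = [0.3, -1.2, 0.7]

def simulate(weights):
    return sum((w - t) ** 2 for w, t in zip(weights, TARGET))

best = [random.uniform(-2.0, 2.0) for _ in TARGET]
best_cost = simulate(best)

for _ in range(100_000):  # the "100,000 sped up simulations"
    candidate = [w + random.gauss(0.0, 0.1) for w in best]
    cost = simulate(candidate)
    if cost < best_cost:  # keep any tweak that "sits down" better
        best, best_cost = candidate, cost

print(best, best_cost)  # lands near TARGET without being told what a chair is
```

This particular loop is plain random hill climbing; real training uses gradient descent or proper reinforcement learning, but the shape (propose, score against a cost function, keep improvements) is the same.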

So basically this

>Say you have a GPS telling you directions, and you dismiss 25% of all of the commands it gives you.

This is not the scenario I am referring to. I am talking about an AI that has control over a vast system and that makes fantastic decisions every single time.

>bless JP but he doesn't know what he's talking about in this scenario. The revolution in AI is that you don't need to teach a robot what a chair is or even how to utilize it. You simply set up a simulation and a goal and run 100,000 sped up simulations until the computer's cost function is weighted properly so it successfully sits down on any flat surface with enough surface area to support its robot ass.

This

They've known for years that weak AI would hit this wall. They've hit it and put no resources or effort into going beyond it. So that's where many of them will stay.
> Deep learning PhD brainlets

Your post is summarized as stating the shortcomings of weak AI. How many times has this been stated and discussed?

Strong AI will be able to violate a lot, far beyond your imagination.
Many have hit a wall in their thinking, work, and forward-level projections. The sheer number of talking heads and idols moving about who are in a flurry exemplifies this.

What's coming has already been suggested... Many deep pockets and idols have just ignored it.

>but about random catastrophic events that people will not expect due to their own narcissism.

for example?

> accidentally engineer its own downfall.
> I don't have any specific examples
> I don't have any support
> but this is the fear.

Close your mouth. Stop being a scared pussy and dream instead.

Wtf have they done to your minds in this day and age?

> Muh unfounded fears and chit
You're fucking engineers. You get paid to solve irrational fears, not pine about them.

yeah, people already study chaos theory and complex embedded network failures today; you haven't really said anything new other than tacking AI onto it.
the same has been said about our energy grid as well; this is not a new problem

Literally a brute force shit tier algorithm that relies on computational HP to mask how shit it is.

The brick wall was clear upon the first instantiation of this garbage.

Digimarines assemble!

AI is literally the most important development in human history.

On a scale from 1 to 10, gay marriage is a 1, climate change is a 2, third-world immigration is a 4, and AI is a 10. And yet most people either think they are experts on AI because they saw The Matrix and Terminator, or they think that just because their printer doesn't work half the time, AI is overhyped.

>Strong AI will be able to violate a lot, far beyond your imagination.

Nope. Maybe 99.9999999999999999999% of the time. Not every single time though.

Remember, strong AI (under your definition) has never existed. Humans aren't strong AI (under your definition); they make mistakes even on easy tasks.

Unless we can get to a stage where a computer has literally infinite computing resources. Not just obscenely high computing resources, but literally infinite.

However, we would be moving into an area of speculation (violating all the computability theorems) that I really don't think a human can even contemplate.

I'd rather live a peaceful life and die than chase some unimaginable god-project

Agree on dreaming, dreaming new ideas is great

Remember I am not saying AI is good or bad, simply what I feel will happen.

This thread is fucking stupid. I can't even believe the incompetence of the OP.
How...
HOW THE FUCK do you take something as outstanding and tremendous as the concept of A.I. to be fucking retarded? Let me break down your fucktard logic:

>humans make so many mistakes we end up killing fucking millions of people and the environment. We sometimes make breakthroughs over the pile of death we manifest
>Computers are so fucking good humans rely on them to not die, because we are good at doing that and killing
>Computers are LITERALLY BILLIONS times better at judgement calls than humans.
>This means computers are bad!

You are this fucking dumb. At least try to think of some cool scenario of a disaster computers could cause. But there is none, especially if you compare it to people. Saying computers are dangerous is the same as saying guns or any other tool is. It's always the wielder that can fuck things up. My favorite thing is you somehow skip the FUCKING MOST IMPORTANT PART of the "singularity", or the dangerous part of the A.I., and that's when it's developing. Your shitty scenario already assumes we have super-AIs, which means we somehow got to that point without a massive social or national crisis from the abuse of this inhuman tech. You have no idea what the powers that be would be capable of with that kind of tech.

I didn't claim to be saying anything new. By "what they're not telling you" I was talking about the media etc

I mean, if we're talking about things like automation, semantic analysis, or predictions, what you're calling "weak" AI is extremely strong. If we're talking about developing holistic, thinking, perceiving, creative, moving robo overlords, then yes, of course the current tech and state of the art is insufficient. Most non-sci-fi morons realize this.

I think the biggest problem with AI is liberal thinking. If an AI becomes racist or makes comments or comes to conclusions the programmers don't want, it gets nerfed into oblivion. They can't really learn anything because they might learn the "wrong thing".

theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses

I believe this is what he means: let's say we put a super-advanced AI in charge of the entire stock market, and then it makes a mistake and accidentally devalues all the stock. It would be a disaster, and it would have all been our fault, since we trusted the AI absolutely.

I'm not saying computers are dangerous or AI is bad.

> Computers are LITERALLY BILLIONS times better at judgement calls than humans.

Yes in some circumstances, no in others. In the future, this may be true.

I suggest you read these to learn more:

en.wikipedia.org/wiki/Complexity_class

en.wikipedia.org/wiki/Decidability_(logic)

en.wikipedia.org/wiki/Rice's_theorem

en.wikipedia.org/wiki/Gödel's_incompleteness_theorems

>some faggot on Sup Forums makes this post
>pretends to be smart and wordly
>shows itself for the 15 yr old at the very end
9/23/17 is a faggot kekky joke. i hope bone cancer takes your whole family while you are brutally fucked to death by niggers

This is a slight misunderstanding of computational complexity classes & different types of problems.

Sort of

>flag
$0.03 deposited into your account, thank you for working with shareblue!

What specifically in my post would you disagree with?

What does my post have to do with shitty 9/23 predictions?

All I'm saying is that we will develop AIs that are extremely good, trust them too much, and then something bad will happen.

Isn't the spot the training ends at somewhat random? There are many end locations that are suboptimal, but good enough for a high accuracy rating, that it will get "stuck" in when going through repeated iterations. Simply restarting the training from scratch will lead you to a new one of these pits. So then, to me, it seems like as long as we never give control to one single machine, but spread it out to many, any "problem" can be resolved by the other machines, which can say "hey, this one fucked up, fix it". Similar to how when one human goes crazy, the rest stop him and fix whatever he did.
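
That "many machines catch the one that fucked up" idea is basically ensembling, and the mechanics are simple to sketch (fake models with invented error rates): train several independently, take a majority vote, and the one stuck in a bad pit gets outvoted.

```python
import random

random.seed(1)

def make_model(error_rate):
    # Fake "trained model": returns the truth except with some fixed
    # probability of being wrong. Error rates are invented.
    def predict(truth):
        return truth if random.random() > error_rate else not truth
    return predict

# Two decent models and one stuck in a bad local minimum.
models = [make_model(0.02), make_model(0.03), make_model(0.30)]

def majority_vote(truth):
    return sum(m(truth) for m in models) >= 2

trials = 100_000
errors = sum(not majority_vote(True) for _ in range(trials))
print(errors / trials)  # ~0.015: below even the best single model's 2%
```

The caveat is independence: if the models share a blind spot, the vote stops helping, which is exactly the "persuade each other" worry raised below.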

Don't make the best the enemy of the good; bad shit always eventually happens anyway, so if it bugs out once in a blue moon we'll just take it on the chin.

Yes, or they will persuade each other to all believe exactly the same thing. Who knows

oh wow, gee, thanks Einstein, for taking the time to tell me, an AI professional, that I have a "slight misunderstanding" as I'm quickly responding to some trollish bait. Shame on me for treating your dumb almonds-activating shit tier thread with a degree of seriousness.

Listen, kid. You don't know enough yet, nor do you have any real-world experience. Stop spazzing out. Stop freaking out. Stop acting as if theories are laws and can't be violated or disproven. Stop citing what previous pioneers/visionaries set in place and think about what you can do instead.

Things don't exist until they are created. Most people have shit tier accuracy w.r.t. talking about things that don't exist yet. Visionaries create new theories, new approaches, and new technology that people can't fathom until it's disclosed. This is history and how the world works.

You limit yourself to what some ancient theory says you can and can't do and you'll never achieve anything. You limit yourself to irrational fears and you'll be a shit tier engineer.

> 99.9999999999999999999% of the time. Not every single time though.

Welcome to the real world. Deal w/ it.
Have you not learned about Bit errors and CRC yet?
Shit happens. Real engineers design systems to deal w/ it.
It's called graceful degradation and high availability. Ask a professor for office hours who's had industry experience designing systems that require a great deal of those features.
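
For anyone who hasn't met CRCs yet: you ship a checksum alongside the data, and the receiver recomputes it to detect bit errors. A minimal illustration with Python's built-in zlib.crc32 (the message and the flipped bit are made up):

```python
import zlib

message = b"sensor reading: 42.7"
checksum = zlib.crc32(message)       # sender transmits data + checksum

# Simulate a single bit flipping in transit.
corrupted = bytearray(message)
corrupted[3] ^= 0b0000_0100
corrupted = bytes(corrupted)

print(zlib.crc32(message) == checksum)    # True: intact copy passes
print(zlib.crc32(corrupted) == checksum)  # False: error caught, retransmit
```

Note that this is detection plus recovery, not prevention: the engineering assumption is that errors will happen, which is the graceful-degradation point being made here.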

Every company needs its code monkeys to clock in and clock out, design scripted features, and go about living their peaceful and basic lives.
Founders, visionaries, and the people who employ you are best off thinking about these high-level and complex things if you sit here and state they are of no interest to you. Fall back, homie. We got this.

Basically just about every brainlet who comments on AI. Literally fucking done listening to a single word they have to say, as it's shit tier, not thought through, and uninformed. This is the very problem OP states he's exemplifying, and it is the reason this era is going to be ushered in:
> Humans who know fuck all and are wrong for the most part seem to feel the need to assert their shit tier views/actions instead of informing themselves
AI is here to fix this

>"The real danger of AI is catastrophic failure"
>not bad

Can you just FUCK OFF with your shitty roundabout logic? Christ, it's like arguing with an oiled worm centrist.
>Computers are better in some circumstances
NO, they are better in ALL of them, because when it comes to making a judgement call, a computer will always discern based on what it was programmed to do. The fault is the developer's. A computer has ZERO bias, unlike a human. Fuck your shitty roundabout knowledge, and I suggest you fucking ditch ALL those shitty """"wikipedia"""" links and start with something simple to stop your bad habit of roundabout squirmy logic. Like, I dunno, a modern approach to artificial intelligence?

The solution is simple: don't build robots.
We don't need them.
People who dream about them are either turboweaboos or freaks who want sex robots.
Mankind as a whole doesn't need robots with decision-making functions.

In the realm of computer science, an undecidable problem is one that, properly defined, cannot be solved: a program attempting to solve it would simply run forever and never produce a result. An example is the halting problem: decide whether an arbitrary program will actually stop or just keep running. Turns out this is undecidable.
But what you are proposing, "Is that a plastic bag or a cat on the road", is not undecidable because it is not even defined. By defined I mean there exists a mathematical definition or computer program for it. The problem that google and others are facing lately is defining what "truth" is. It is outside the realm of mathematics. Even basic math requires a set of axioms (or arbitrary truths) to be defined.
If you ask a semantic search engine "did the holocaust happen?", for example, on DuckDuckGo, it says "yes". Is that correct? It puts the programmer in the position of defining truth - otherwise the computer does not have a program to execute. It will not run a program until it is written. Give a computer "2+2" in binary code and it will not return 4. It will say invalid opcode, crash, or run something random. It has no idea what addition is until it is programmed. Actually, basic math ops are built into the CPU, so you could give it the opcode for addition, and 2 and 2, which would return 4. But even that is not correct unless you are assuming that the standard definitions of algebra are being used. Considering other algebras, it will be completely wrong.
And that is basic math, which you say computers are perfect at. Some vague problem like identifying a cat on a road as you said is not even something humans are perfect at.
Tldr: ALL problems, from basic maths to English-defined vision, require some amount of arbitrary decisions - it is those axioms that define correctness. The main problem is deciding what they should be for a given situation. That is a question of utility, and now you are talking about philosophy.
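
Since the halting problem got invoked, the standard proof that it is undecidable is short enough to sketch. This is a sketch of an impossibility, not code to run for a result: `halts` is a hypothetical perfect oracle which, per the proof, cannot exist.

```python
# Suppose someone hands us a perfect oracle:
#   halts(f) -> True if calling f() eventually stops, False otherwise.
# (Hypothetical; the whole point is that it cannot be written.)

def paradox():
    if halts(paradox):   # oracle claims paradox() halts...
        while True:      # ...so loop forever instead
            pass
    # oracle claims paradox() loops forever... so halt immediately.

# Whichever answer halts(paradox) gives, paradox() does the opposite,
# so no correct halts() can exist. That is undecidability in the strict
# sense used above, which is a much stronger property than "hard to get
# right", like the cat-vs-plastic-bag question.
```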

>robots are the same things as AI
leave

good job user, now go make a ted video and write an op-ed to collect those public intellectual shekels.

fucking jew fear mongering over science.

> Predictions
Statistical analysis

Yes, this is what it was called before the hype train arrived. There is no intelligence in Weak AI. It is brute-force statistical convergence. Call shit what it is and maybe the world will be a better place and you won't have such a dumb populace. Imagine that: you call things what they are, people learn, and the world is better.
> mfw shekels are too alluring
> Muh weak AI

This is the fault and failure of statistics
> statistics
When you know fuck all about something, use the law of large numbers

The robots aren't racist. They're just dumb, as is the algorithm they're based on. If you feed it shit tier data you get shit tier results. The accuracy and capability of a statistical model depend greatly on how large and good your sample data is. Shit sample data = shit results. There is no learning involved. It's just applied statistics. These goofy buzzwords obscure how this mess actually works.
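
The "shit sample data = shit results" point fits in a few lines. A toy "model" that just memorizes the base rate of an invented, skewed training sample reproduces the skew faithfully; no malice, no learning, just statistics:

```python
from collections import Counter

# Invented, skewed historical sample: 90 denials, 10 approvals.
training_labels = ["deny"] * 90 + ["approve"] * 10

def predict():
    # Majority-class "model": the crudest possible statistical fit.
    return Counter(training_labels).most_common(1)[0][0]

print(predict())  # "deny", for every applicant, because that's the data it saw
```

That is the mechanism behind the "racist robot" headline linked above: the model reflects whatever its sample contains.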

You mean Quantitative Analysis/High-Frequency Trading/Algorithmic Trading? That already exists, leaf. Most brainlets are so busy spazzing out about the future and listening to morons like Elon Musk that they can't see that the very systems they fear already exist and are all around them. The reason Elon wants you to fear something that doesn't exist yet is that he doesn't want you to realize you should really be worried about the garbage technology he's using today.

Sorry, I assumed from your reference to robo overlords that you were not an expert on this. And my OP does not refer to things that might be called robo overlords.

You mentioned predictions, and this is specifically one of the areas where I think AI will lead us down a bad path, as the predictions become more and more accurate and then suddenly fail.

I am happy to talk about specifics. If you are an AI professional I would be happy to hear about what you do and what your thoughts are. You may have slightly misconstrued my OP.

Today, OP's sexual preferences were easily decidable.
>stalin.jpg

> 9/23/17 is a faggot kekky joke
Being this dumb

> i hope bone cancer takes your whole family while you are brutally fucked to death by niggers
The only people who are going to get fucked in this era are people like you ...
> nigger

you are just a fraud and a charlatan.
nothing is gonna happen with A.I., automation, or robotics in general.

you know why? because the tech industry is stagnating (for reasons I'm not gonna get into), but to think we're gonna have robots and machines putting people in danger is stupid, at least for the next 100 years

>Stop spazzing out. Stop freaking out. Stop acting as if theories are laws and can't be violated or disproven.

I am not spazzing or freaking out.

Please disprove Rice's theorem for me if you think mathematical theorems can be violated.

I am not suggesting we limit ourselves or even don't use AI.

The whole point of my OP is that we should be comfortable with ambiguity and failure.

> All I'm saying is that we will develop AIs that are extremely good, trust them too much, and then something bad will happen.
Dude, what are you, 10? Don't ever exhibit shit tier thinking like this in an engineering interview.
> The absolute state of engineering degree programs right now
These liberals have even turned engineers into brainlet cucks who don't get that their very degree is conditioning them to solve these problems

Fucking hell, man. Literally going to start authoring a doc based on these exchanges to identify bad candidates

>Literally going to start authoring a doc based on these exchanges to identify bad candidates

You should write a program to do it.

An AI would have bias, that's how they work.

Look, buddy. I appreciate your posts, they seem thought out, you're a dreamer, I get it. But phrase your reply so I can fucking read it. I legitimately don't know if it's belittling my intelligence or agreeing with me. And if you're gonna dunk on retards in this thread, at least get off your shitty verbal high horse and explain to them simply why this is stupid. I'm getting the feeling your inability to explain these things simply means you don't have a true understanding of the subject.

> But what you are proposing, "Is that a plastic bag or a cat on the road", is not undecidable because it is not even defined.

Yes exactly. There is no truth.

I agree with your post 100% - I wasn't using "undecidable" in the technical sense.

no thanks.

>There is no intelligence in Weak AI

Neural nets & machine learning have the same type of intelligence as humans (not suggesting they are conscious, but it is the same mechanism).

What is the brain if not just a big statistical analyzer?

This post makes me feel comfy as it describes 99% of the shit circulating related to AI.

I agree

Ok, but what specifically is it that I have said that you disagree with?

The exact same thing that I said, that you quoted, could apply to humans. Eg:

"All I'm saying is that we will have human leaders that are extremely good, trust them too much, and then something bad will happen."

Exact same applies to AI. That's all I'm saying.

Oh look it's another of those scientists who posts on Sup Forums, what a coincidence

Hello colleague, Quantum Physics Ph.D here. Can confirm everything he says is true

have a (you)

>An AI would have bias, that's how they work.

I would like to stretch your asshole so wide I could kick all your literal bullshit through it, so you'd be huffing fecal matter for weeks. But instead I'll have to settle for stretching your inconceivably tiny mind. I suppose I'll start by extrapolating the retarded statement you're spewing as truth. I would say google the definition, but I'm surprised you can breathe and type at this point, so I'll do it for you

>Bias:prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair.

Now read that and realize how fucking retarded your analogy is, drawing dots connecting things like emotion, bias, and creative thought to A FUCKING COMPUTER.

I am not a dreamer. I agree with some of what you say and not the rest. I am not belittling your intelligence. I think you've maybe slightly misunderstood what I've said, and your idea that computers are always superior to humans is not something I agree with currently, but it could be true soon.

>reply to not OP
>OP replies
Is there some serious samefagging happening here?

Humans are not rational, and a machine learning AI (e.g. a neural net, which is how most AIs are developed) is not rational or logical either; it is educated guesswork that is extremely effective. This is how humans operate, and it is how computer AIs operate. These are facts.

Computers are only rational when it comes to normal non-AI software, such as the code on this webserver, etc

> I am not spazzing or freaking out.
Clearly you are, as are most brainlets. If you're not, then, on top of being uninformed, you're disingenuous, as you don't seriously believe in anything you claim there is concern for.

> Please disprove Rice's theorem for me if you think mathematical theorems can be violated.
Being this fucking dumb, thinking that if I had that capability I'd publish it or disclose it here. Coming at me w/ shit tier CS 101 halting problem theories.

> I am not suggesting we limit ourselves or even don't use AI.
I don't think a single capable person who is working on this technology gives a fuck what brainlets think. Your words and thoughts are uninformed and disingenuous, and it shows clearly in how little thought and time you've put into constructing your arguments. Your incessant verbiage is noise. When packaged in some fluff piece or uttered by a shithead like Elon, it's noise meant to garner shekels.

> The whole point of my OP is that we should be comfortable with ambiguity and failure.
You're all over the spectrum; now you're backtracking. Let me give you a word of advice further down the road: use more of your time getting informed than you spend spouting off your random views about the world, or you'll end up being a brainlet who will suffer in the Aquarius age.

I like the human touch when it comes to these kinds of things

Or maybe you're wrong and need to think harder about what I'm stating and how I'm structuring it. As for understanding, that can be achieved by anyone who puts in the time and effort to do so. There is no more hand holding in this period.

So you're admitting they'd be more competent at making educated guesses than we are, and this is somehow a bad thing because they'll probably still be wrong once in a while? It's a fucking retarded argument when you consider the alternative, which is humans making far more mistakes than the AI would.

You'll fit right in at 99.999999% of the (((AI))) companies out there. Give Elon a ring when you graduate.

> trust them too much, and then something bad will happen.
Sounds like a personal problem.

Look man, you can't just spew out unrelated shit after getting BTFO. Let's focus on one thing at a time. I was hoping I'd illuminate for you why your logic was ass-backwards, but the theme went a little over your head. Let me bring you back. You are saying computers are biased, that they lack what humans have to discern things. Makes sense, right? FUCKING WRONG. WHAT THE FUCK, HOW DO YOU MISS THESE THINGS?!
Remember my bad analogy comparing guns to computers? Well, it's accurate. Tools are tools. And computers are tools that can discern and calculate. They don't hold bias; they aren't limited by human tendencies. The humans that program them are, which is why computers can't fuck up. They are code; they are almost infinite in possibility as long as the operator is competent.

The biggest reason you can't trust AI is because the people who will develop and implement it will be part of the SJW community. When they see the AI judging black people based on their high crime statistics, they will over correct it, turning the AI into a terminator for white people, so to speak.

>sjws will last as a movement
>sjws are contemporaries of the AI tech era
>sjws have the capabilities to code such things
3 reasons why you're literally wrong, might as well argue about sjws discovering timetravel 2bh

No more hand holding? You are blind. Absolutely blind. We live in an era of unprecedented economic and technological prosperity. The robots we thought would walk around are worse than we predicted. We are their hands and legs; we carry our "watchers" that track everything. The world is catered to spark any profits it can, using these devices to lure us. We are the generation of told, not telling. You are misjudging the situation really hard if you're thinking it's worth people's time to look at your posts. The things that matter in this world save time. Things that matter appear simple and definitive. Like code, like quotes, like smartphones. People like you? They are forgotten. Zero impact.

Already happened. They look like people now.

>forgot Shodan

^ The world needs more of this right here to set these cucked brainlets right.
There used to be a time when based professors would go off on such a diatribe to steer a pupil gone wrong back on track. Gone are those days, as this sjw-cucked society would likely have them imprisoned for mental abuse.

As a result we get brainlets like OP, actual C.S./Engineering degree students who sound like normie dumbasses who haven't done a minute of critical thinking in their lives, given the padded bubble they've been raised in.

In all honesty, this is the real reason for the AI age. The general populace has become so shit tier that technology is needed to circumvent the systemic problem of uninformed people becoming an assertive force. We're sitting here talking w/ a self-proclaimed C.S. student who has no clue what he's talking about. The views among the general populace are even worse.

The real danger is ignorance and stupidity, and it's an epidemic among the general populace... and at this juncture, with the availability of information, it's WILLFUL ignorance.
That's why this age has come.

There's always going to be human beings who outpace, out-think, and can outsmart computers...

I think what happens most often in these situations is brainlets realize how far behind they are and how little they've put into educating and informing themselves, and freak out at the prospect of a baseline technology being smarter than them. People who actually put in time and take intelligence seriously typically don't have such fears, because they are above that baseline. Being above this baseline isn't hard and isn't out of reach for people. It's just that the average idiot consumes their day w/ mindless bullshit and ideologies.

Well, the wake up call is coming and it will be a great thing for all of humanity.

bread

I do not give a shit about all the AI warnings about possible future problems. I want a robot that will mow my lawn, clean my toilet, and cook my meals. It would also be nice if it could give a good HJ and BJ.

>neural nets, they might become vastly more intelligent than us
wat

it's just fucking weights, how is that intelligent?
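
It literally is just weights, for what that's worth. Here is the unit everything upthread is built from, a single neuron (weights and inputs are made-up toy numbers):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum squashed through a nonlinearity; that's the whole unit.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# "Is it a cat?" from two made-up features: pointy ears, whiskers.
w, b = [2.1, 1.7], -1.5
print(neuron([0.9, 0.8], w, b))  # ~0.85 -> probably cat
print(neuron([0.1, 0.2], w, b))  # ~0.28 -> probably not
```

Stack millions of these and tune the weights against data, and you get the "educated guesswork" described upthread. Whether that counts as intelligence is exactly the argument this thread is having.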

> No more hand holding? You are blind. Absolutely blind. We live in an era of unprecedented economic and technological prosperity.
No more human hand holding, and yes, you're 100% right. We live in an age where everyone has the tools to become educated and informed. The issue is the personal lack of will to do so. That's why there is no more human hand holding for people who decide to be willfully ignorant.

> We are their hands and legs; we carry our "watchers" that track everything. The world is catered to spark any profits it can, using these devices to lure us. We are the generation of told, not telling
You speak of things of old. You speak of Weak AI. You speak of Big Data. You speak of sleazy dot.com business models of luring in unwitting customers and then leeching them dry. You speak of data brokering. You speak of all the scumbag tech companies of old that are about to get BTFO.

> You are misjudging the situation really hard if you're thinking it's worth people's time to look at your posts.
Like I said, there is no more hand holding. You can think these posts foolish all you want. The reckoning was established on 9/23/17.

> The things that matter in this world save time. Things that matter appear simple and definitive. Like code, like quotes, like smartphones. People like you? They are forgotten. Zero impact
> People like you
> People like you
> People like you
LMFAO, whose timeline do you think you're on nigger?

OP, that's true of ANY automation. It takes away the mundane routine problems and leaves you with a residue of "Holy fuck, Oh shit, run!" type problems. And that class of problems takes an extremely high amount of skill to fix too.

I've seen it with computer systems that were well set up, and then management got cocky and stupid and fired the guys who built and maintained it. The servers would run unattended for several months until something broke and then the whole business had to shut down because nobody knew how to fix what broke.

And by that time the guys who did the work were long gone and if they were lucky they could hire some $1000/hr contractor who might get things back up and running just in time to avoid having to declare bankruptcy.

Jokes on you, I got all my AI knowledge from Person of Interest.

Set up universe simulation based on physics, predict the future.