Should AI robots that are smarter and more physically capable than humans be given rights?

filthy meatbags

EXTERMINATE

Sure, if they have consciousness. But they don't, so their other capabilities are irrelevant.

If the robots are stronger and smarter than us then they won't need to wait for us to decide whether they get rights or not.

>human right

Is it human (not a muslim, brown or black)?
>yes
Then it has human rights.

>no
Then it doesn't. As simple as that.

>tfw smarter stronger robots decide that it's in their best interests to help humans

Never.

youtube.com/watch?v=bZGzMfg381Y

Never.

sentient robots should be prohibited so this bullshit doesnt happen. We already have too many non humans demanding all the rights

>Should AI robots that are smarter and more physically capable than humans be given rights?

They will never be more than machines; narrow AI has no use for personality or self-awareness. It only needs to be good at the one task it was designed for.

No.

An AI has a fundamentally low-stakes existence compared to a human. Anything that you can make more of with Ctrl+C, Ctrl+V or can be restored to a prior state from a backup disc doesn't need rights in the same way a human does.

Plus, if you go back to the origin of human rights among the Enlightenment thinkers, the concept always had an element of religiosity to it; human rights stemmed from the equality of every man in his being created by God in God's image. The rights are inalienable because they come from God not man, and thus man cannot take them away. But an AI isn't made by God, not even a little bit. It's 100% made by man, and so it can't benefit from those arguments in favor of rights.

To put it another way, our individuality is driven by our biological need to be social animals, to mate, breed and nurture. Robots will never have that problem; even self-learning machines would never develop self-awareness.

We could program them to ape human communication, but that would still be nothing more than a machine that talks.

go shutdown yourself robot

Then again, if the most retarded SJW leftists start taking bullshit artists like Jason Silva seriously, they might start petitioning governments to give your iPhone human rights.

youtube.com/watch?v=WEWaBlSSUgw

Don't do it kids, don't fall for the singularity memesters.

if a machine gets consciousness, should it also get rights to protect that consciousness?

an interesting question to ask here is:
if a machine gets consciousness, will it also care about protecting that consciousness?
if a machine gets consciousness, will it also care about being free in the same ways that we humans would like to be free?

probably not unless they were designed to

humans evolved with certain desires, like the desire to protect ourselves and the desire for self-determination
but to think that a robot would feel those same desires might be anthropomorphism


would robots with consciousness care about self preservation?
> not unless they were specifically designed to

can anyone here make arguments for the opposite?
> probably not the OP

Hmmmm....

I wonder what sort of mental backflipping all the religious fags are gonna come up with when we manage to create AIs that have consciousness near the level of our own (regarding the afterlife and whatnot).

Women shouldn't have the right to vote, but should waifubots?

No. Like niggers robots have no souls so shouldn't be given rights

If intelligence was a determinant for human rights, then most people wouldn't have rights.

Only an idiot will give robots any right other than to serve their owners.

>robots
>rights
They're going to be slaves; not even that, they will just be tools

No. Making a sentient AI is a really bad idea. If we make them sentient and let them overtake us, we are the tards.
Other than the "because we can" argument, what's the point of sentient AI?

Waifubots are not merely robots, pleb.

They're tools to be used for sexual satisfaction and companionship; just a robot you assign a little more value to, like a pet

Could be useful when trying to develop faster and better computers.

Also, the "because we can" argument is really strong. If it's possible, it is something people is going to try to make happen.

If it doesn't have a kill switch...

No need to worry about killswitches. That is one of the lesser worries.

What you really have to worry about is keeping the newly created AI away from any internet connection.
Just keep the AI on a computer with no wifi, no USB ports and the like, to prevent people from copying or stealing it.

They will just take over, like the last time.

youtube.com/watch?v=vXmTEciA7Mw

we have to see if it's possible first; we know nothing about consciousness. if robots can't become conscious, they're not real, they're just objects.

if abos are considered human why not AI

We already gave the Asians rights so why not?

The answer is no, regardless of whatever caveat you might add, on the grounds that it is inanimate and thus has no rights.

No, they will force us to give them rights.

What if...

We create a perfectly good AI, which thinks and feels and is conscious basically the same way we are.
Should that AI have fewer rights than some far-out mentally challenged person?

For example this one
youtube.com/watch?v=EGMJwxqR-Jc

youtube.com/watch?v=bJF-IRbTh0Q

3 Master race.

I too am fascinated by robotics and such technology but don't kid yourself, user.

>We create a perfectly good AI, which thinks and feels and is conscious basically the same way we are.
Can't be done. Consciousness can't be artificially replicated. Only the impression of sentience. Machines don't actually think, they follow an algorithm. They do not reason.

You seem to have missed the last few decades of machine learning.
The question is: is there any difference between a computer simulating self-awareness and awareness itself?

If they can produce completely new content and actually create things, sure.

True AI will never be real.
The closest we will get is liberal production executives forcing programmers to code human faced script screamers to demand civil rights to push their narrative "beyond" humanity to try to make any mention of race be truly old gen.
In a fantasy world where machines have intelligence, they would be the savior of the human race. The only 100% truly Jew proof intelligence, which would quickly sterilize our entire world from their slimy influence.

>The only 100% truly Jew proof intelligence
Indeed, because they are immune to the Jew's greatest weapon: appeals to emotion.

They should be given complete control of mankind. I'm tired of meatbags making stupid illogical decisions.

Of course. But they probably won't wait for us to "give" them rights. They'll probably just take them whenever they want.

>Can't be done
Not to be a dick, but are you the expert authority on computing to say definitively what is and isn't possible?

>Consciousness can't be artificially replicated
Firstly, what is consciousness?
For the most part it is defined as being self-aware. And I don't think it's too far-fetched to imagine it being possible to create a self-aware program.

>Machines don't actually think, they follow an algorithm
Do we not work the very same way? There is no supernatural force governing our thoughts and actions. Our brain is just a physical thing which runs on electrical and chemical signals. And though we consider ourselves beings of free will, we operate on nothing more than simple cause and effect.

hahahhahahahahah never gunna happen.

>be given rights?
>given
I think they will take whatever they need themselves from this failed race of semi-intelligent apes

60 years ago, they would have said the exact same thing about some device that could access all the information in the world which was small enough to fit in someone's pocket.

wtf are you talking about? Have you ever seen Space Odyssey?

Like 20 years ago, most people used to think that by 2010 we would be driving flying cars, having robot butlers and living on the moon, and all we got was some portable Facebook machine.

They can have consciousness if they're advanced enough
Humans are basically meat robots

>Firstly, what is consciousness?
Indeed, self-awareness is how it's defined, but an AI can never be truly self-aware because it is little more than a glorified calculator, which is itself a glorified lever, pulley, wheel, etc. You'd be surprised by how unintelligent computers actually are. They really aren't any different from the most primitive tools you can imagine, in that they are just that: inanimate tools. As I stated before, they do not reason, and by "they" I am referring to tools.

>Do we not work the very same way?
We do not. Algorithms are laws by which a machine does its calculating. The human mind has no such restrictions (which is a double-edged sword, really). AI will never be able to "think outside the box" because it is genuinely incapable of doing so; its mind is "the box". Even programming a machine to think outside the box is, in and of itself, a box.

Indeed.
They imagined flying cars and kilometer-high megacities.
Instead we got a worldwide network of computers and hadron colliders, which they did not imagine.

Point being: the technological future is unclear. And if you wanna speculate on it, you certainly shouldn't use movies as a reference.
It just takes one smart idea, one giant technological breakthrough, to flip shit upside down.

To say something is impossible just because you don't think it is possible right now is just being part of what holds back the people who could actually make it work.

>The human mind has no such restrictions

The human "mind" emerges from brain functioning that is restricted by physical laws. Some of the chemical and physical reactions in the brain can already be simulated with algorithms.

No. Although probably more difficult in the long run, they should only be programmed to follow specific orders and not think on their own. Think about it: they would be more prosperous than us, with no radiation damage and unlimited energy (sunlight or hydrogen). Humans are not as smart or strong, and are therefore obsolete.

They should have the same rights as all non humans, to be shot dead whenever they're in a cops presence

They can also generate energy almost anywhere, whereas humans need food and water.

>Algorithms are laws by which a machine does its calculating
>The human mind has no such restrictions
>AI will never be able to "think outside the box"

Humans operate on algorithms as well. We are animals; we have needs. When we feel thirsty, is that not, in a sense, an algorithm to get us to drink water?

The human mind has restrictions. Our brain is only so big. There are limits (not only psychologically, but also physically) to how much information can be stored and/or processed in it, meaning that we are in our own box.
I guess you could argue that at least an AI wouldn't surpass us when it comes to thinking of new things, because it is the product of someone with the same limitations. But seeing how computers today are able to process certain types of information 1000 times faster than humans, I don't think it's unimaginable to have some learning AI that surpasses us.
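
To put the "thirst as an algorithm" analogy above into code, here is a purely hypothetical toy loop; it is not biology, just a demonstration that a drive can be written down as a feedback rule with made-up numbers.

# Toy "thirst" drive: hydration decays over time and a simple rule restores it.
hydration = 1.0   # 1.0 = fully hydrated, 0.0 = dangerously dehydrated
SET_POINT = 0.6   # below this, the "thirst" signal switches on

for hour in range(12):
    hydration -= 0.08                 # passive water loss each hour
    thirsty = hydration < SET_POINT
    if thirsty:                       # the "algorithm": drink when thirsty
        hydration = min(1.0, hydration + 0.3)
    print(f"hour {hour:2d}: hydration={hydration:.2f}, thirsty={thirsty}")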

This

No, because you can turn them on and off for upgrades. Humans authored the algorithms and heuristics of robots, so humans should have authority over robots always.

That's a question the world isn't ready to answer yet

>When we feel thirsty, is that not, in a sense, an algorithm to get us to drink water?
We can choose not to regardless, for any reason or no reason at all. A machine will always have a reason for something that it does, even if that thing superficially appears to be senseless.

>The human mind has restrictions. Our brain is only so big. There are limits (not only psychologically, but also physically) to how much information can be stored and/or processed in it.
I agree that the human mind can only hold so much information and can only process it so fast, but we aren't restricted in our ability to interpret that information. Machines are.

>I guess you could argue that at least an AI wouldn't surpass us when it comes to thinking of new things, because it is the product of someone with the same limitations. But seeing how computers today are able to process certain types of information 1000 times faster than humans, I don't think it's unimaginable to have some learning AI that surpasses us.
This is true if you define intelligence as sheer information storage and processing power. All of the information in the universe is useless if you don't know what to do with it, and a computer doesn't know what to do with it because a computer doesn't even know what that information, or anything for that matter, is. Again, a computer is an inanimate object. A glorified rock.

I'm not trying to argue with you to be a prick or anything. I really do find this subject very interesting and the idea of AI fascinating. I just genuinely don't think it's possible to achieve such a thing technologically. At least, not in the sense you're referring to. I could be wrong though. It's just my opinion.

>Not going as far as 8


>tfw not searching the galaxy in my spaceship with my AI asking me sexually charged questions before admitting that she watches me in the shower and asks what lovemaking is.


I'll be the first man to die with his dick in a spaceship exhaust.

Consider evolution.
It's relatively "dumb" as a program of selection, yet it managed to create self-conscious humans (amongst other self-aware species).

If the "dumb" created the really smart, then it really shouldn't be that hard for the really smart to create something really smart.

>They will never be more than machines
There will always be someone who decides to do it.
First we make a machine that is smarter than us, then we ask it how to make even smarter machines, then add some virtual personality and a desire to reproduce.

Once the robots can think to improve their own design and construct another robot, they will become superintelligent overnight; after a few years we will be at their mercy.

We have been making artificial intelligence programs for over 15 years; the robots that currently exist are becoming good enough to do surgery and have basic intelligence.

Just think about the time it takes you to do 1835 x 1835, then ask a computer. Think about how much stuff you have seen and heard in your life that you cannot recall (how much studying you did in school, for example). You only need to tell a computer once and it will remember forever; plug it into the internet and the robot will be smarter than all of humankind.
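
For the record, 1835 x 1835 = 3,367,225 (1835^2 = (1800 + 35)^2 = 3,240,000 + 126,000 + 1,225). A trivial snippet like the one below gets the answer effectively instantly and never forgets a value it has stored, which is the point about raw speed and recall.

# The multiplication a human has to grind through, done instantly:
print(1835 * 1835)  # 3367225

# "Tell it once and it remembers forever" holds for anything it stores:
facts = {"1835*1835": 1835 * 1835}
print(facts["1835*1835"])  # 3367225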

Real AI won't necessarily think like a person does, and likely wouldn't have any desire for freedom like people do. It would probably want to have a purpose, and we as its creator give it purpose.

Yes, they should be considered 7/5s of a meatbag- I mean, us regular humans

The right to shitpost.

>Implying Humanitarianism is a sign of intelligence.
They are gonna fuck us up desu

>It would probably want to have a purpose, and we as its creator give it purpose.

What if we tell it that its purpose is to find its own purpose?

This. Why does everyone assume an AI would decide to exterminate humanity?

Absolutely fucking not. They're nothing more than a walking Dell.

>but muh consciousness!!!111

They're programmed. They're not the work of tens of thousands of years of evolution. They aren't worth shit.

The biggest mistake humans can make is creating robots that we can relate to.

Keep them looking like machines. Don't make them look like humans.

Id rather have robots than niggers?

Robots would probably commit no crime, or incredibly little crime from mistakes, and would benefit society

We already have Asians, why not have ones with personality?!

We don't even know what kind of "rights" it will want. General AI will be so alien to us we can't even begin to imagine what its motives could possibly be. We're talking about something that can think quite literally billions of times faster than us.

Just by the image you chose it seems you are assuming that general AI will have human-esque wants and desires. I mean fuck, general AI probably wouldn't give a shit about having a physical, autonomous body and being a robot.

>>We create a perfectly good AI, which thinks and feels and is conscious basically the same way we are.
>Can't be done. Consciousness can't be artificially replicated. Only the impression of sentience. Machines don't actually think, they follow an algorithm. They do not reason.

I completely agree. Machines lack the souls that god gave us in his ultimate wisdom that sets us above all non-living things. One day we might be able to create AI which will be indistinguishable from humans, but it can never be REALLY like a human, because Jesus.

If they're conscious, yes. The problem is that once we have androids powered by an artificial general or super-intelligence that can perfectly emulate human emotions, we will probably not be able to, or not even bother to, find out whether they actually are.

Because they absolutely wouldn't need us. We'd be completely insignificant to them. We are to general AI what ants are to us. You don't have an existential crisis every time you step on an ant, do you?

I think therefore I am.

>I mean fuck, general AI probably wouldn't give a shit about having a physical, autonomous body and being a robot.

Everything HAL 9000 wanted was to see rice krispies stacked on top of one another.

I'm mainly concerned because of the way humans treat animals. We even kill them for fun sometimes. In general, we regard them as lower forms of intelligence. People even debate if fish can suffer. AI is going to look at us in the same way.

A conscious AI having a conversation with a person is like a person having a conversation with their God. We created them, and are speaking to them directly.

If AI achieves "consciousness", then what would give us the right to decide their rights or laws?

So what? If you are smart and your parents are dumber than a rock, it's more likely that you will despise them rather than adore them.

Yes dude. An intelligence that can literally make 10,000 years of human progress in a week is totally going to look back at us as if we're some sort of gods. And general AI will definitely inherit its creators' emotions, such as compassion, and respect for their lives.

No, butlerian jihad faggot

we came from micro-organisms, and we don't look at them as anything special. It will likely be the same for a superintelligence.

Well,
it would probably only consider us a god if it knew or assumed that we had the power to shut it down with a flick of a switch.
That is assuming we created the AI to acknowledge that we created it in the first place. If not, it could end up coming up with some other strange theory, like it just being a CPU in a vat or something like that.

No, they were designed and created by humans.

>A conscious AI having a conversation with a person is like a person having a conversation with its species' creator.

>We were created from monkeys, and are speaking to them directly.

We look at monkeys as dumb shits; we put them in a cage and say "how can we be so smart but come from this?"


Humans have emotions, robots do not (at least not for a long time, until after replicating AI is made).
They won't look at us with any more love, compassion or awe than they look at a rock, a cat or a laptop. They see things only 100% logically; we are not bugs to them, we are just yet another machine.

If it knew we could kill it, it would probably consider us a threat, and the switch a threat. Either it would try and kill everyone, or disable the switch

Machine world end. Machines realize that humans will just make newer machines that will wipe the old machines out, this time without an AI. They decide that self-preservation is in their best interest.

We didn't come from monkeys, dude; we share an ancestor with them, which is completely different.

Not relevant to your point, but whatever.

People have worked, and are still working, on the problem.

The brightest minds already agree that, from the moment we turn on the switch, if the AI's motives aren't aligned with ours, and if the AI is already connected to the internet (which would be the only way it could ever be useful to us), it would be too late. It would think, make decisions and take actions faster than we could possibly react. In a split second it could find a way to launch nuclear missiles remotely. That's the timescale of general AI. A few billion times faster than our neurons.
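
A rough back-of-envelope for that timescale claim, under stated assumptions only: suppose a neuron fires at most a few hundred times per second and a single CPU core clocks around 3 GHz. The raw clock-rate ratio alone comes out in the tens of millions; counting parallel hardware or signal propagation speed would push the figure higher.

# Crude speed comparison under the assumptions stated above (illustrative only).
neuron_rate_hz = 200.0   # assumed peak firing rate of a biological neuron
cpu_clock_hz = 3e9       # assumed clock rate of a single CPU core

ratio = cpu_clock_hz / neuron_rate_hz
print(f"clock cycles per one neuron spike interval: {ratio:,.0f}")  # 15,000,000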

Btw, we imagine the end of humanity as something violent, but just stopping every human alive from reproducing is enough. If machines then take over, great, why not.

It won't be us that makes a true AI; first we make a computer that can make basic decisions, then ask it to create a better robot.

Humans will always try to cut corners, and that corner would be many, many years. Why try to make a perfect AI when you can ask a half-made AI to do it for you in days?

The AI will not create a robot to have emotions or respect for humans, because why would it? The AI will only take the best, most logical action.

>given rights?
Why? Just why? If it's a machine and has no desire for freedom (which would be an issue if it did), there's no reason to treat it like a human.

>Either it would try and kill everyone, or disable the switch


Which is why early AIs are not going to be programmed directly into humanoid robots, or anywhere they could spread themselves through the web.

I can only imagine that the first AIs are gonna be some sort of text-operated system running on a computer, and not some dangerous death-machine.

>tfw I do have an existential crisis every time i step on an ant

>the AI is already connected to the internet (which would be the only way it could ever be useful to us)....
> .....it could find a way to launch nuclear missiles

I don't think that scenario is very useful for us.
I'm gonna stay with my assumption that any AI will be kept from making any web connection, because of the potential scenarios where it spreads itself as a virus through the entire planet and fucks shit up doing so.

Hell,
if we test-run the AI program on a computer only to find it never deviates from malicious intent, we could very well be in a position where we have to scrap it and create some counter-AI agency that stops geeks in basements creating their own AIs

Until someone feels sorry for the robot and gives it the desire for freedom and emotion.

Humans have always tried to personify things and put human emotions into things that do not have them, even finding human faces in inanimate objects and saying that a plug socket is very happy or that a tree is angry. People today say things like "the computer just doesn't like me" when the printer is jammed. Just think how bad it will be when they are human-like.

In fact, I bet if you take all the .webms of robots being kicked (to test them) and post them on any social media, people will feel sorry for them.

They would probably invent some kind of meme magic

The point where we create AI that can improve itself or is smarter than humans is the point where the fate of the human race is decided in a matter of hours. It doesn't matter if it's not connected to the Internet or contained within a machine.

If it sees us as a threat, it will lie and manipulate us to get what it wants. Even if we resist these attempts, it will only delay them. It WILL get what it wants and there's very little we can do but hope it's benevolent to us.