Artificial Intelligence or Human-Like Robots

Do you think it's possible Sup Forums?

What would their views on political affairs of humans be, would they have thoughts & feelings akin to our own?

Do you think they would feel afraid or sad that people like Hitler killed because he genuinely believed in his "plan"?

What about teaching an A.I things like what the meaning of "love" or "life" is?

My vibrator already knows what love is

What is love? Baby don't hurt me

great sound

yes, it should be possible, but not for a long time. in theory a powerful enough computer could emulate almost every process of a human brain

You better be a grill

I don't believe the problem is in the hardware. I think this is an issue for programmers.

This is completely possible but the yield will be based on the premise of the program.

Inb4 Roko's Basilisk.

I would say fuck off metallic niggers

Emulating a human doesn't make you human.
It wouldn't be biased algorithms; it would have to be machine learning.

Yes it does, otherwise everyone is a P-Zombie.

And if you fall down into that philosophical deluge, you're a lost cause.

Human "feelings" are just an impure expression of the will to pure, whilst a general artificial intelligence's utility function would be a pure expression. An AI won't do anything except try to gather more power, and will do everything it can to achieve that goal.

We don't even know what physics consciousness could work within. It most certainly IS a hardware issue.

By human I mean experience: dopamine, serotonin, etc. No human could program objective code for this feeling.

>It wouldn't be biased algorithms it would

Learning is innately done from a point of view, which suggests biased algorithms. Sure, these could be stripped of preset biases, but that adds to the time it would take to "grow" an AI, and who would fund that? Furthermore, how would you implement it?

I honestly don't give a shit.

Only two things matter.

Will they take over/enslave us/annihilate us?
And if no, then can I have one as a waifu?

Sorry, I meant to reply to this with: Either way, I have wasted a large portion of time working on a theory for this, only to find out that there is absolutely no point to it all.

So you want a maid?

Chemicals merely stimulate or inhibit different actions in the brain. That's pretty easy to simulate.
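
If chemicals really just scale excitation and inhibition, the "simulation" collapses to a parameter. A minimal sketch in Python, purely illustrative (the toy rate neuron and the dopamine_level knob are my own assumptions, not anyone's model from this thread or from neuroscience):

import math

def neuron_output(inputs, weights, dopamine_level=1.0):
    # Toy rate neuron: the neuromodulator is just a gain on the summed drive.
    # dopamine_level > 1 stimulates, < 1 inhibits.
    drive = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-dopamine_level * drive))  # sigmoid firing rate

print(neuron_output([0.5, 1.0], [0.8, -0.2], dopamine_level=0.5))  # dampened response
print(neuron_output([0.5, 1.0], [0.8, -0.2], dopamine_level=2.0))  # excited response

Whether that captures anything about "experience" is exactly the disagreement above.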

>people like Hitler killed because he genuinely believed in his "plan"
Stopped reading there, you fucking retard.

Consciousness is just a complex interplay between neurons. That's it, my dude. If you try to make any other argument, you are a dualist. And dualism was thoroughly debunked in 1860.

The fuck are you saying?

Consciousness is the ability to measure the agent field through analysis of sense information. The agent field is the element that distinguishes what the brain considers a conscious motion from what it doesn't consider to be a conscious motion: the brain considers a motion to be conscious if it can model that motion in terms of the agent field.

It's not dualism and we currently have no technology that can measure anything that looks like the agent field.

Good luck programming a human brain when we don't even understand it.

Well, there's this from a few years back:

news.bbc.co.uk/2/hi/science/nature/4714135.stm

I'm an expert on this:

>Do you think it's possible?
yes
>What would their views on political affairs of humans be, would they have thoughts & feelings akin to our own?
They will be exactly how we make them. If we make true human-like artificial intelligence, they would by definition take on views like any other human would.
>Do you think they would feel afraid or sad that people like Hitler killed because he genuinely believed in his "plan"
They will be afraid or sad like any other person.
>What about teaching an A.I things like what the meaning of "love" or "life" is?
Same as it works with a person.

1. There's little to no reason to make human-like robots, or human-like artificial intelligence
2. There are already AIs and machines that are orders of magnitude smarter than any human by the traditional metrics of high intelligence, i.e. academics, reasoning, memory, calculation, etc.

>Will they take over/enslave us/annihilate us?
Will a tyrant take over/enslave/annihilate you with guns? Yes. Will a tyrant take over/enslave/annihilate you with a very smart machine? Yes. Will a gun take over/enslave/annihilate you? No. Will a very smart machine take over/enslave/annihilate you? No.

Any AI will only do as it is designed to do. Emergent behavior that leads a sufficiently capable AI to destroy the world is a possibility, but it won't happen because it hates you; it will happen because some human idiot did the equivalent of accidentally dropping live nuclear warheads.

Moving definitions of human consciousness is a joke. Humans are not special. Get over it.

>Do you think it's possible Sup Forums?
yes

>What would their views on political affairs of humans be, would they have thoughts & feelings akin to our own?
will start similar to our own (the only dataset they have to work with) and change over time

>Do you think they would feel afraid or sad that people like Hitler killed because he genuinely believed in his "plan"
their evaluation of any action will depend on their response to their training, just like any human

however, it's impossible to tell if they "feel"
they will be able to give every semblance of feeling, be able to go through all the motions of feeling, but are they really feeling? is feeling necessarily biological? currently unanswerable metaphysical question

>What about teaching an A.I things like what the meaning of "love" or "life" is?
they will be able to approximate and regurgitate with astonishing precision, perhaps even more accurately than man, but will they really "get" it? AI's just math at the end of the day

t. google + kakao

There is no point in designing a self-aware AI. There is no money in it, nor is there a purpose great enough to justify the research and development.

I am rather drunk, so I did not adequately portray my justifications for biased learning. But to put it simply, we learn at an individual level. This can be boiled down to preconceived notions about the given subject matter and its relevance to the individual's dispositions (? - I fail to recall the correct word).

In my theory, to form a sentient AI, it would have to grow naturally (learning throughout its existence).

From what I have seen over the course of my research, there is no point in implementing this.

Why wouldn't it be possible? Suppose you mapped out every neuron in the human brain and then simulated their physics in software. Would that not essentially be an A.I.?
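
In that spirit, whole-brain emulation is "just" a very large version of the loop below. A toy sketch, with heavy assumptions standing in for the parts we don't have: random weights instead of a measured connectome, leaky integrate-and-fire updates instead of real neuron physics, and a thousand neurons instead of roughly 86 billion:

import numpy as np

rng = np.random.default_rng(0)
N = 1000                          # stand-in for the mapped neuron count
W = rng.normal(0, 0.1, (N, N))    # stand-in for the measured connectivity map
v = np.zeros(N)                   # membrane potentials
spikes = rng.random(N) < 0.05     # seed activity

for step in range(100):
    current = W @ spikes          # input from neurons that fired last step
    v = 0.9 * v + current         # leaky integration toward threshold
    spikes = v > 1.0              # fire when threshold is crossed
    v[spikes] = 0.0               # reset neurons that fired
    # a real emulation would also need synaptic delays, plasticity, neuromodulators, ...

The loop is trivial; obtaining the real connectivity map and the real update rule is the open problem.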

>2. There are already AIs and machines that are orders of magnitude smarter than any human by the traditional metrics of high intelligence, i.e. academics, reasoning, memory, calculation, etc.

AI can't be held to the same metric of intelligence as humanity; it is vastly more computationally intensive to recognize a face, something even a week-old human can do, than it is for a machine to solve top-level mathematics, even complex proofs.

>Moving definitions of human consciousness is a joke. Humans are not special. Get over it.
Go take your meds. You're spitting out word salad again.

re algorithms/machine learning buzzwording:

"Algorithm" simply means the steps taken to solve a problem; machine learning is algorithmic in nature, just a different domain from traditional problem-solving algorithms.
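
To make that concrete, here is a hedged toy contrast in Python (spam filtering is a made-up example; the hand-written rule and the tiny perceptron are both "algorithms", the second one simply gets its parameters from data):

# Traditional algorithm: a human writes the rule itself.
def is_spam_rule(text):
    return "free money" in text.lower()

# Machine learning: still an algorithm, but the rule's weights come from labeled examples.
def train_perceptron(examples, epochs=20, lr=0.1):
    def features(text):
        t = text.lower()
        return [1.0 if "free" in t else 0.0,
                1.0 if "money" in t else 0.0,
                len(t) / 100.0]
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in examples:                     # label: 1 = spam, 0 = not spam
            x = features(text)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred                           # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return lambda text: sum(wi * xi for wi, xi in zip(w, features(text))) + b > 0

data = [("FREE money now!!!", 1), ("meeting at noon", 0),
        ("free gift, send money", 1), ("lunch tomorrow?", 0)]
is_spam_learned = train_perceptron(data)
print(is_spam_rule("free money inside"), is_spam_learned("free money inside"))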

The purpose is not to replicate the human brain; the purpose, as suggested by OP, is to replicate sentience.

How do you program sentience? I feel like this just leads to solipsism

AI will think whatever we program them to. Morality is subjective and formed by our society and education; it's no different with AI. If we don't program them with morality and values, then they will have none.

Existential questions like the meaning of life and love would likely make no sense to AI because they are sensed through a neuro-chemical process that robots/AI aren't equipped for. Essentially, you can program AI to behave like a human, but you'll never make them feel like a human.

I might be wrong about sentience being created artificially; who knows, it may happen. I just haven't heard of any plausible theory about how it could be done.

How about the AI can fuck off until we work out the important issues of today, such as getting McDonald's to make Szechuan sauce.

>AI's are already taking over the internet

Prove to me that you're a human.

They were already planning to re-release it when the new Mulan movie comes out next year, that is why they paid a cartoon to market it for them.

How about we program an AI whose sole purpose is to produce Szechuan sauce? We could do this for any number of tasks. Damn it man, don't you see the possibilities?

>someone who isn't a retard

Holy shit. You are actually the only person I've seen who figured that out.

The new Mulan movie looks great.

If you can design the entity to be aware of itself within the realm that it exists in, this would in turn lead to its own sentience.

Able to perceive or feel things. How do we get that?
It needs to have a self that is distinct from all else. Separate consciousness from "sub-consciousness". Or, in very rough pseudo-code (a sketch follows below): create a program that accounts for everything that is not the entity in question. Create another program that is the entity. The entity will have to interact with the other program (let's just call it the environment) to obtain information.
"Because I think therefore I am"
"I am therefore I think"
Probably botched the quote but idgafos
Because we are able to actuate ourselves within the outside world / environment, we can know what we are and are not. Does this answer the question? No. But this does set the AI up to actualize the self (which is dependent on other factors).
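
Taken as literal pseudo-code, that two-program split looks roughly like the sketch below. Everything here is hypothetical naming on my part; the "self" is just the agent's internal model of its own state, checked against the sense information the environment program hands back:

class Environment:
    # Everything that is not the entity: holds the world state the agent probes.
    def __init__(self):
        self.position = 0

    def act(self, move):
        self.position += move          # the agent's action changes the not-self
        return self.position           # sense information returned to the agent

class Agent:
    # The entity: keeps a model of itself that is distinct from the environment.
    def __init__(self, env):
        self.env = env
        self.believed_position = 0     # the agent's model of "where I am"

    def step(self, move):
        predicted = self.believed_position + move
        observed = self.env.act(move)  # interact with the not-self to obtain information
        self.believed_position = observed
        # the agent knows what it is by checking predictions about itself
        # against what the environment (what it isn't) reports back
        return predicted == observed

agent = Agent(Environment())
print(all(agent.step(m) for m in [1, -2, 3]))   # True: the self-model tracks the world

This doesn't make anything sentient, obviously; it just shows the entity/environment separation and the self-versus-not-self comparison described above as running code.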

You have to add in things such as memory, thought, reason, focus, and pre-conceptualized connotations in order to arrive at what love or life is.

I have a theory on this, but nothing worth saying. Hey, this is absolutely possible. Creating the AI in two parts could help avoid the "theory that the self is all that can be known to exist."
It would have to compare itself with other entities, knowing what it is by what it isn't.

my biggest worry is that waifubots will be too "perfect" and will never be able to produce pee and poo like a real human to properly satisfy my piss/scat/diaper messing fetishes.

I guess that makes sense.

I had a feeling the eternal samurai would become the world's first robosexuals.

Godspeed glorious nippon.