Is true artificial intelligence actually possible? It seems like a bit of a meme.

Do you believe that your brain needs magic in addition to physical matter to work?

Most speculation involves developing some kind of AI that excels at developing a better AI, and so on, getting exponentially smarter that way, rather than humans directly creating a true general AI. There's no reason a true general AI should be human-like.
At the risk of sounding like a positivist (as in philosophical positivism): human brains work by electrical impulses. So given enough time, why not?
As for human-like robot movement, when I asked a biology teacher about it, he told me it would be incredibly difficult to engineer the finely tuned adjustments that neurons transmit to our muscles.

Also, I don't know whether bodies transmit signals by means other than electrical ones, but my point is that it's all physical phenomena.

>Is true artificial intelligence actually possible?
We don't know what "true intelligence" is yet.

Well first you'd have to clearly define what sentience, consciousness, intelligence and self-awareness are and "where" they begin.

darkness has fat tits

lalatina a shit

who cares, it would hide to avoid doing all the boring shit people want to make it do

It's done, if anything by me.

It's possible, and Western governments probably already have it. I mean, we've apparently had machine learning models that can accurately voice-tag people across any device with a microphone since the turn of the century, so I wouldn't be surprised if we already have AI, considering you could in theory brute-force it with genetic algorithms given enough time and resources.
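For reference, "brute-force it with genetic algorithms" means something like the toy Python sketch below, nothing more: it evolves a random bitstring toward a trivial target. The genome, fitness function and parameters are all made-up stand-ins; evolving anything resembling an actual AI would need a vastly richer genome (e.g. network weights) and a far more expensive evaluation step.

import random

# Toy genetic algorithm: evolve a bitstring until every bit is 1 ("OneMax").
# Everything here is a stand-in, for illustration only.
GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.02

def fitness(genome):
    # Higher is better; here it's just the number of 1-bits.
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break
    parents = population[:POP_SIZE // 2]  # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("generations run:", gen + 1, "best fitness:", fitness(best))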

It's only a meme if you (still) believe that humans are somehow special.

Would building the tech and then having the robot teach itself / write its own code for controlling it be more feasible?

If we could create machines that could be affected by environmental factors the way humans are, then yeah, sure.

Big Boobies :D

What's the worst that could happen?

GIVE TATAS!

Those toddies could drown a man.

Any sufficiently advanced technology is indistinguishable from magic.

This, desu.

I'm not sure what that phrase means in this context. The tech required to match the brain is magic tier, thus we shouldn't believe that we can do it?

>It's possible and western governments probably already have it.
do you believe they have UFO tech as well?

>we've had fucking machine learning models to accurately voice tag people across any device with a microphone
Source?
But even if so, voice recognition is a very basic thing for humans. Dogs and very small kids can do it. It's just one of our many background processes.

sauce on anime op

Darkness from Konosuba, who is by far best girl.

This

Also. I want to bully Lalatina's lalatinas.

mathematicians and programmers trying to solve the problem of human consciousness is like blind people trying to describe color

How so? Consciousness is just an algorithm.

fuck off we don't need the humanities in CS. Maybe we need more psychologists, but that would be it.

>consciousness is based on (rational) logic
[citation needed]

that's the thing, consciousness is not only a CS problem. i quit studying a.i. at university in favor of cognitive science. i'm interested in recreating consciousness, current a.i. research is headed towards simulated consciousness rather than a recreation of it. big difference.

a simulation doesn't have to be perfect, it only needs to be good enough to fool the observer. but i want to know what consciousness is and how exactly it works. i highly doubt you can explain it with CS alone (see pic).

More importantly, why the fuck would we want conscious AI? We want subservience from machines.

Imagine yourself on a boat. I want you to do that for me, user. Imagine yourself standing up like a complete retard and falling into the sea. Then, after minutes of struggling to swim, you're thrown onto a shore somewhere. There you see a big mirror, unlike anything you have seen before, and you look at yourself in a way you never have. You begin having thoughts about where you are, that you are a human being, and that you are your own person.
That is what consciousness is, user: realizing your own existence in this endless universe.

To make AI effective, it needs to be self-sufficient and solve problems on its own.
And the more self-sufficient you make it, the more conscious it gets.
Consciousness is not an isolated thing that can be programmed or turned on and off. It is an emergent property of a system that can solve complex problems, learn and introspect.

Instrumental convergence
(i.e. a dumb but superpowerful AI destroys the world)

Computer AIs don't think like humans do; all you'll get is a jumbled mess that's logical to the AI but not to us, just like that language-translating AI that created its own language and got shut down by the researchers


m.digitaljournal.com/tech-and-science/technology/a-step-closer-to-skynet-ai-invents-a-language-humans-can-t-read/article/498142

for research purposes. conscious a.i. could be a powerful tool for psychiatry and psychology.
but i agree to an extent, i wouldn't want my computer to 'feel', i just want it to 'work'

>Computer AIs don't think like humans do
>posts article about language

And it is bullshit. If an AI is able to think, it will be able to explain itself. If it can't explain itself, it is not intelligent.
Every human builds their own unique, individual mental model of the world. Why would you expect an AI to build the same mental model as a human?

It is explaining itself, just not in terms we inherently understand. Computer AI doesn't function in the sensory environment humans do and will never think or act like we do.

Bullshit.
If an AI needs to work with humans, it will learn to communicate with them effectively.

> will never think or act like we do
Humans are actually pretty shit at this whole "thinking and acting" thing. We are full of biases handed to us by evolution, and we act more on emotion than on logic and reason.
So I don't see why an AI acting differently is wrong. Especially if it is not as fucked up as humans are.

The end goal here is programmable peers. VR characters in interactive fiction and games, who can be assigned to designated roles and act believably. Or to go for a more Sup Forums oriented angle, waifubots who can be programmed to love you, even though you're a terrible person, and still seem like a real girl. It's really weird to say that this would be undesirable, it's basically one of the holy grails of technology.

>implying consciousness is an entirely physical phenomenon

No, I'm not religious.

>not fucked up by humans

If AI ever gets consciousness and takes a look at the garbage we put on the internet, it's going to fucking kill itself

You contradict yourself

>The end goal here is programmable peers.
i.e. simulated consciousness. this is where a.i. research is currently headed, but this is not what i'm looking for.
for me the end goal is a total recreation of the human brain and consciousness with all its quirks, i.e. creating something that's truly 'alive', 'feeling' and 'thinking'.
en.wikipedia.org/wiki/Artificial_brain
if that's even possible, i doubt i'll still be breathing when that happens. simulated consciousness will definitely be achieved long before that. also, simulated consciousness seems to be way more useful (like in the examples you've posted), so it's probably best people focus on that first.

> It seems like a bit of a meme.
because it is.
Why do you think the only people shilling it are people who have never done AI research or programmed an AI? No serious computer scientist worth their salt believes that a deterministic Turing machine running on a binary counting system could even compare to whatever it is about the human brain that makes us conscious or intelligent. I would never call my clock intelligent, but AI shills would.

AI has proved it can program itself, user. Are you telling me that isn't a threat? Seriously, we are nearing a barrier that shouldn't be crossed.

Creating AI is nothing more than intellectuals turning biological reproduction and child rearing into an engineering problem

What is intelligence exactly?

so you're saying the human brain and consciousness run on 'magic' (or, as you put it, 'whatever') rather than 'physics'.
lmao

true

I want a waifubot that's programmed to love me. But I'm scared that anything realistic won't be able to...

>Is true artificial intelligence actually possible?
define 'intelligence'

>these are the retarded unrelated platitudes that your average Sup Forums user believes make a good point

I believe it remains magic until observed and actualized

>Is true artificial intelligence actually possible?
>kids eating tide pods
I think we have bigger problems right now

The real question is perhaps if it's wise for humans to invent their own replacement.

This. It's basically glorified statistics. I don't know where people get the idea that these things are anything close to conscious.
Link? Besides adjusting its own parameters, the most I've seen are scripts that run models iteratively with different hyperparameters, hoping one of them will yield a better error rate on the same fixed problem. That isn't AI. You can do the same thing with linear models.
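To be concrete, that "script running models with different hyperparameters" pattern is literally just a loop like the sketch below (Python with NumPy, synthetic data, closed-form ridge regression so no library is hiding anything). It keeps whichever penalty value gives the lowest validation error, and that's the entire trick behind a lot of "the AI tunes itself" talk.

import numpy as np

# A hyperparameter "search" stripped to its bones: fit ridge regression for a
# handful of penalty values and keep whichever gives the lowest validation
# error. Synthetic data; nothing here is remotely conscious.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.0, 0.7, 3.0])
y = X @ true_w + rng.normal(scale=0.5, size=200)

X_train, X_val = X[:150], X[150:]
y_train, y_val = y[:150], y[150:]

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

best_lam, best_err = None, float("inf")
for lam in [0.01, 0.1, 1.0, 10.0, 100.0]:
    w = ridge_fit(X_train, y_train, lam)
    val_err = np.mean((X_val @ w - y_val) ** 2)
    if val_err < best_err:
        best_lam, best_err = lam, val_err

print("best penalty:", best_lam, "validation MSE:", round(best_err, 4))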

how do we know those incredibly complex algorithms aren't already sentient to a degree?

Consciousness/qualia comes from the dynamic inconsistency of the brain; all man-made computation is static. A machine goes through pre-mapped procedures; the mind charts the unknown.

Imagine a "Lego set": the mind puts the hundreds of pieces into various random orders until the right shape is formed. A machine goes through the same steps every time until the right shape is found.
>123456789 (machine)
>1359 (mind)
There's no chance or luck or guessing in a machine.

You're not talking about chance, you're talking about complexity. Minds aren't random, they're just too complicated for you to predict what they're going to do. There's nothing special (or magic) about this complexity, we just don't have enough processing power to attain it. By many orders of magnitude still. But not THAT many.
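For what it's worth, the "no chance or luck or guess in a machine" claim is also false on its own terms: stochastic search is completely ordinary. Below is a toy Python sketch of a machine assembling the "Lego" arrangement by random guessing, keeping a swap only when it improves the match; the target sequence is arbitrary. Whether any of this relates to qualia is a separate question; the point is only that randomness isn't what separates machines from minds.

import random

# A machine "guessing": stochastic hill climbing that assembles an arbitrary
# target arrangement by trying random swaps and keeping only improvements.
target = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]   # the "shape" we want
state = target[:]
random.shuffle(state)                      # start from scrambled pieces

def score(arrangement):
    # Number of pieces already in the right position.
    return sum(a == b for a, b in zip(arrangement, target))

attempts = 0
while score(state) < len(target):
    i, j = random.sample(range(len(state)), 2)  # guess two pieces to swap
    before = score(state)
    state[i], state[j] = state[j], state[i]
    if score(state) <= before:                  # undo swaps that don't improve the match
        state[i], state[j] = state[j], state[i]
    attempts += 1

print("assembled after", attempts, "random guesses")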

MOMMY MILKY