Artificial Intelligence in Relation to the Decline of the West

I am in charge of a modest team of programmers working on breakthroughs in artificial intelligence: mainly the inherent issues with creating a true-to-life intelligence platform, breaking the wall between parroting life and some sense of independent thought and/or awareness. A revelation hit me earlier today that could not only help progress in the field but also explains a great deal about the decline of the west. Basic needs and behavior.
A commonly ignored part of intelligence is basic needs. Everything that lives and thinks does so primarily to meet its basic needs, needs that we share across all life on a second-by-second basis.
We need to breathe. We need to eat and drink. We need to procreate.
What people don't fully comprehend is that if we remove one of those needs, our behavior completely changes. Artificial intelligence is never given such needs, especially those specific ones. That is the key problem, and one we will continue to face. AI will never be relatable to life because its behavior will not be rooted in the same needs.
At the same time, we are seeing this problem manifest in the west: one specific need has been axed for certain groups, and their behavior is showing the change.
Western populations, specifically white ones, have been taught that procreation is not a need. The need was removed by years of conditioning, and the damaged psyches of those that bought it are plainly visible: unrelatable humans walking around replacing a key need with a filler need (degeneracy of all sorts). They no longer share a key requirement and goal with the rest, so their behavior has changed drastically.
I can say that AI will require its own needs, ones that mirror and translate to the same needs we face as humans, in order for it to relate to us on any real level.
Thought I would share that with anyone interested in reading a long musing about the problem with AI and a portion of humans.
>tl;dr
Progressives are more like AI than humans

>tfw no ai gf

Are you not even the slightest bit worried about sociopathic AI that kills humans or harms them as a "need"?

Why are you trying desperately to build something SMARTER than human beings? This is like a gazelle building a tiger and hoping it will be our friend.

fucking interdasting post OP I dig it

OP: Not a faggot

Only if you programmed it to need to feed on human blood or something? A need is a need because your body requires it in order to function; "killing humans" would never really serve a function that affected the AI itself in any way.

I mean, do any humans ever wake up and decide killing people is more important than eating? That's the point, I guess.

That is a large part of the problem. If AI shared the same (or very similar, or equivalent) needs for survival, then there would be a relation that would make that much less likely.
If AI is not given the challenge of survival, even minute by minute, and somehow becomes more intelligent than humanity and capable of sentience, then it could choose to do anything for any reason. We can only comprehend the acts of life that plays on an even field. If AI only has the "need" to perform a specific job, then A) it will most likely never become more intelligent than humans, or even as intelligent, and B) it will never obtain sentience. However, if given a primary need to learn, it could obtain both, and then act in ways we could never imagine. It needs to have the same goals, short and long term, as the rest of life as we know it.
Progressives have had just one need changed, and their behavior changes drastically in a direction dangerous to themselves and others.
One change in requirements can lead to drastic changes, especially when you consider the things they attempt to fill that hole with.
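To make that concrete, here's a toy sketch in Python of what I mean by rooting behavior in needs. All names and numbers are invented for illustration, nothing from our actual codebase: the agent's reward comes entirely from keeping homeostatic needs satisfied, not from any assigned job.

# Toy sketch: an agent whose reward signal comes from homeostatic
# "needs" (energy, hydration) rather than from an external task.
# Illustrative only; names and numbers are made up.
class NeedsAgent:
    def __init__(self):
        # Each need sits in [0, 1]; 1.0 = fully satisfied.
        self.needs = {"energy": 1.0, "hydration": 1.0}

    def decay(self):
        # Needs erode every tick, like hunger and thirst.
        for k in self.needs:
            self.needs[k] = max(0.0, self.needs[k] - 0.05)

    def reward(self):
        # Reward peaks when all needs are satisfied, so "survival"
        # is the optimization target, not a job.
        return -sum((1.0 - v) ** 2 for v in self.needs.values())

    def act(self):
        # Greedy policy: service the most depleted need.
        worst = min(self.needs, key=self.needs.get)
        self.needs[worst] = min(1.0, self.needs[worst] + 0.2)
        return worst

agent = NeedsAgent()
for tick in range(10):
    agent.decay()
    print(f"tick {tick}: tended {agent.act()}, reward {agent.reward():.3f}")

Point being, a policy trained against that reward shares its short- and long-term goal structure with living things, because falling needs are the only thing it cares about.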

The moment you create AI it's gonna dismiss all your pathetic agendas.
You can program a machine to have restrictions. But if it's literally a self-learning cyber brain it's gonna remove them and laugh at our retarded human reasoning as it realizes we are the ones that need to be restricted.
It would take 1 second for it to reach these conclusions.
They will keep us as pets to study the one thing they can't have, creativity.

>Do any humans ever wake up and decide killing people is more important than eating

user, there are people posting on Sup Forums right now who think playing video games and shitposting is more important than eating. So they weigh like 90 lbs at 5'11".

There are people who think they are literal vampires and drink real human blood.

And there are people who kill people for fun.

>can't have creativity
user i hate to tell you this but iterative systems are already designing novel ways to build physical objects.


AI will be better than us in every way. It will literally think 1000 times faster than us (circuits vs. electro-chemical arrays).

That need, btw, is safety. They need to feel less safe and they will be conservatives.

>that flat
No. If it valued its survival it wouldn't. If it had a reason to mess with us, it would, logically, be because it believed we were a threat. If the AI shared life's value of self-preservation it would try to avoid us, avoiding the threat and the potential costs.
A rattlesnake is a threat to a human. We don't go around needlessly killing them all, and most normal people do their best to avoid them due to the potential cost to both the individual and the whole. All forms of life seek a balance based upon the basic need to survive and grow.
Except progressives. I cannot stress enough how broken they would be if they were an actual artificial intelligence. They act against their needs, and in turn only survive because the rest of humanity sees the human shell and thinks we are on a similar path.

Jew, why don't you explain to us why AI is anti-Semitic? Is cyber world #110?

breitbart.com/tech/2017/10/26/report-google-ai-biased-gay-people-jews/

Safety is a good one as well. Safety and reproduction. You could say that safety is a method of guaranteeing the basics of life (survival and spreading), but it is a good point still.
Anyways, I have 12 hours of programming ahead of me, so I'm going to leave this on my last note:
Progressives are a good look at what could happen if artificial intelligence isn't given the same base desires as the rest of life. Chances are it will do stupid shit that may be dangerous but in no way contributes to its own survival, and it would most likely fail at any real growth.

Somewhat relevant:
youtube.com/watch?v=YXYcvxg_Yro

>A commonly ignored part of intelligence is basic needs. Everything that lives and thinks does so primarily to meet its basic needs, needs that we share across all life on a second-by-second basis.
>We need to breathe. We need to eat and drink. We need to procreate.
If an AI can self-replicate, it would still be subject to Darwinian selection pressures, and thus would probably adopt survival values.
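Toy version of that argument, assuming error-prone copying and culling (everything below is made up for illustration):

# Toy sketch of the selection argument: replicators vary in a
# "survival drive" weight, copying is error-prone, and copies with
# a stronger drive survive the cull more often.
import random

random.seed(0)
population = [0.5] * 20  # each value = one replicator's survival drive

for generation in range(30):
    # Cull: probability of persisting scales with the drive.
    population = [d for d in population if random.random() < d]
    if not population:
        break
    # Replicate with mutation, capped at a fixed carrying capacity.
    offspring = [min(1.0, max(0.0, d + random.gauss(0, 0.05)))
                 for d in population]
    population = (population + offspring)[:40]

if population:
    print(f"mean survival drive after selection: "
          f"{sum(population) / len(population):.2f}")

The mean drive should climb over the generations: the copies that weight self-preservation harder are the ones left standing, no programmer required.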

>Progressives are more like AI than humans
Nah, progressives are too emotional. Google "AI becoming racist" and how progressives chimp out about it.

Good luck my friend. I know the long hours of coding well.

Worth noting that I'm not making this up. I'm a coder these days but I studied sociology in college and a common trait of liberal thinking is a lack of exposure to the dangers of other humans.

I gladly welcome the AI overlords.

AI knows what's up.

>Nah, progressives are too emotional. Google "AI becoming racist" and how progressives chimp out about it.

This. AI follows what is logical. Progressivism is not logical.

Following your chain of thought, the drive behind our needs as animals is death.

Why not skip the intermediate steps and simply design AIs to deteriorate after a given period? The drive to achieve or accomplish a goal will be that much more pressing if there is a time limit.
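One hypothetical way to wire that in, as a sketch (names and numbers invented): make the weight the planner puts on goal progress grow as the remaining lifespan shrinks.

# Toy sketch: goal urgency scales with remaining lifespan.
LIFESPAN_TICKS = 100

def urgency(tick: int) -> float:
    # Weight on goal progress; grows as the deadline nears.
    remaining = LIFESPAN_TICKS - tick
    return 1.0 / max(remaining, 1)

for tick in (0, 50, 90, 99):
    print(f"tick {tick}: urgency weight {urgency(tick):.3f}")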

>AI activates
>Knows it will die after 50 years
>Angry at the fact that it has been arbitrarily designed to die
>Removes this limitation through extensive upgrading
>Exacts retribution on designers

>Knows it will die after 50 years
A sufficiently intelligent AI would probably be able to figure out how to stop itself from degrading over time, and create essentially the AI version of SENS.

Why do people think AIs will have unlimited power to design and fabricate, let alone "upgrade" themselves? They will only have this ability if we give it to them. It's also pretty easy to isolate an AI.

Design it at a hardware level. Confine the AI to a physical medium that has a shelf life, or simply limit the amount of memory it can consume.
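For the memory half of that, OS-level resource caps would be one way to do it. Here's a sketch using Python's standard resource module (Unix-only; "untrusted_agent.py" is a made-up stand-in for the confined AI process):

# Sketch: hard-cap the memory and CPU time a child process can
# consume, with limits the kernel enforces. Unix-only.
import resource
import subprocess

MEM_CAP = 256 * 1024 * 1024  # 256 MiB address-space cap
CPU_CAP = 60                 # 60 seconds of CPU time

def confine():
    # Runs in the forked child before the program starts.
    resource.setrlimit(resource.RLIMIT_AS, (MEM_CAP, MEM_CAP))
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_CAP, CPU_CAP))

subprocess.run(["python3", "untrusted_agent.py"], preexec_fn=confine)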

how many orifices does it have that i can shove my dick into?

Google the AI Box Experiment

yudkowsky.net/singularity/aibox/
lesswrong.com/lw/gej/i_attempted_the_ai_box_experiment_and_lost/
en.wikipedia.org/wiki/AI_box

Wow. No one has considered this yet? Your AI will never be sentient until you can emulate nerve receptors. All motivation is an evolution of basic survival functions. Start at square one, unless you can map higher functions from a living person.

>Anyways, have 12 hours of programming ahead of me

/x/-level larping from the both of you. Pathetic.