What does Sup Forums think of transhumanism?

...

traps are gay

Nice. Quality response from a memer.

one step further away from God

Anyone who thinks that it will make race obsolete is incredibly naïve. Also traps are not gay.

I am not a fan of trans-humanism.
I honestly think trans people are sub-human.

Prove it.

Why not? Wouldn't a superintelligence be so far beyond us that any IQ deviations between the races become irrelevant?

God doesn't exist though?

One step closer to God user

The lord gave us the means to rise above our base selves and it is our duty to do as such

1. You're assuming this tech will come into existence any time soon.
2. You're assuming laws won't be passed to restrict it.
3.A superintelligence doesn't make niggers disappear.
4. In regard to human augmentation, you would have to assume people choose to give themselves greater intelligence. If there is no need for intelligence because of full automation, then people won't necessarily choose to be smarter.
5. You would have to assume transhumanist tech will be available to the masses and not restricted.

Quality reply.

>1. You're assuming this tech will come into existence any time soon.

"Soon" as defined by overall human history. Is 50 or 100 years 'soon'?

>2. You're assuming laws won't be passed to restrict it.

This is indeed possible, but I think all governments will covertly work on uncensored AI because that's the only way to stay competitive with other companies/nations.


>3.A superintelligence doesn't make niggers disappear.

I understand. But it will introduce a whole new paradigm. It's like arguing about which horse breed is faster, then dropping a car next to them.

>4. In regard to human augmentation, you would have to assume people choose to give themselves greater intelligence. If there is no need for intelligence because of full automation, then people won't necessarily choose to be smarter.

Some will. It only takes one genius to change the world. Most people would want to be smarter if there were an affordable and safe way to do so.

>5. You would have to assume transhumanist tech will be available to the masses and not restricted.

All technologies eventually trickle down. The car and the phone were once the domain of the rich; now practically everyone owns one.

We are less than 10 years from AGI.

What you see publicly is a small portion of cutting-edge research.

transhumanism is a robot supremacist dog-whistle

Inevitable but we all know that the majority of humanity will not share in its benefits. It will only be given to the (((elite))) as is already well underway.

yeah we are all worse off than medieval times too.

Humans require a very small percentage of Earth's energy to exist at luxury levels. Relatively speaking we will all be poor, but compared to now we will live in luxury.

I just found out transhumanism is not the same as transgenderism.

my bad.

>transhumanism?

Hopefully. Will AGI spell the end of most white-collar employment now that robots can do both cognitive and physical tasks?

Robots are supreme.

I feel as if this will be true initially. Inequality has gotten worse in recent years, but it's not like only the rich have cars and phones as they did once upon a time. Why do you think the tech will not trickle down?

>fascist
>not transhumanist

Thought you were just pretending to be retarded.

Transhumanism is pretty vague. The most popular interpretation would be living forever via technology.

The best definition though is believing in fundamental evolution of life on earth past human. That means believing humanity is a stepping stone to better forms of life like AI or a form of enhanced humans.

It stands in opposition to future predictions where humans are still the ones thinking and doing things. The future for humans as we know them today is more like that of dogs, or animals at a zoo. Not drivers of society.

Every competitive government on earth has black-ops AI programs. Not to mention other entities like Google and defense companies.

How long do you think ASI will take after AGI is achieved?

If it becomes anything like Deus Ex or Prey then it'll be bad senpaitachi

>Why do you think the tech will not trickle down?
Because it will reach the point called the "singularity", where human and machine become one, so to speak, and the group that reaches this point first is going to have no possible desire, necessity, or motivation to pass it down. The masses are being force-fed consumerism on unprecedented levels to keep them docile and distracted. It's the Circus Maximus on steroids. Ultimately, after the singularity is achieved there won't be much "use" for most of humanity. Those who have gone through the "singularity" will then strive towards shared consciousness. I predict the ultimate future of humanity rests in a small group of individuals who have melded their consciousnesses and live forever.

The problem for me with transhumanism is that it stifles any kind of organic spiritual evolution/advancement and supplants it with the physical

I suppose if you're an atheist and think we're just flesh robots then transhumanism is great

I don't believe we are flesh robots; I think we are spiritual beings, ergo I don't believe in transhumanism

Communication is incredibly important for intelligence, in my opinion. A singular entity is a pretty bad outcome in comparison to a network of AGIs that also improve and have some sort of relationship with one another.

I feel like a singular entity is probably many times more likely to end up horribly than a networked group of intelligences which are socialized and work together to some capacity

So I'd much rather see a reproductive-style singularity which includes multiplication and variation of the AI species, with each also improving individually.

The idea of a godlike single AI is a really fucking bad idea.

>"Soon" as defined by overall human history. Is 50 or 100 years 'soon'?

100 years is a long fucking time for a human. I'm not even optimistic about that estimate desu. Maybe it will happen, maybe it won't.

>This is indeed possible, but I think all governments will covertly work on uncensored AI because that's the only way to stay competitive with other companies/nations.

The key word there is governments.

>I understand. But it will introduce a whole new paradigm. It's like arguing about which horse breed is faster, then dropping a car next to them.

A whole new paradigm is correct. The problem with that analogy is that it assumes the AI is like humans, just scaled up in certain qualities. I would predict that the AI we create would be quite alien and suited to whatever task it was designed to perform. Also, my previous point still stands. AI won't make Tyrone stop committing crimes, nor will it create the high-cohesion society ethnonationalism would.

>Some will. It only takes one genius to change the world. Most people would want to be smarter if there were an affordable and safe way to do so.

Tell that to the people who would be dealing with the dregs. The people likely to have both the desire and the resources required to acquire intelligence augmentation would be people who appreciate intelligence to begin with. Tyrone might just want to make his dick bigger or put something into his head that makes him happy.

>All technologies eventually trickle down. The car and the phone were once the domain of the rich; now practically everyone owns one.

Not if they're dangerous. Can you own a nuclear reactor, user?

>The idea of a godlike single AI is a really fucking bad idea.

This. We're most likely getting gassed by a godlike AI

They would still be "godlike" in capability compared to humans. Just rather than a singular entity, you'd have a large hivemind network of personalities and entities with variation. They would then keep each other in check while growing exponentially in intelligence and number.

It's somewhat like the single-core vs multi-core argument. It provides a lot of benefits, aside from the fact that communication is important to even getting to AGI.

>personalities and entities with variation

Why would it need that tho? It's assuming a lot.

Well it could emulate them, but you want variation in thought and decision making. It could also run into somewhat of a bottleneck existing as a single consciousness.

There might be inherent advantages to it in the development phase, like being able to use evolution-style selection on the species.
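
Rough toy in Python of what evolution-style selection could look like. Everything here is made up for illustration (the fitness function, the mutation rate, the population size); it's just the select-mutate-refill loop, nowhere near real AGI training:

import random

POP_SIZE = 50       # agents per generation
GENOME_LEN = 8      # size of each agent's "genome"
MUTATION_STD = 0.1  # how much variation each copy gets
GENERATIONS = 100

def fitness(genome):
    # Stand-in objective: closer to an arbitrary target value = better.
    # A real system would evaluate behavior, not a number line.
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome):
    # Small random variation keeps diversity in the population.
    return [g + random.gauss(0, MUTATION_STD) for g in genome]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the top half by fitness...
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # ...reproduction: refill with mutated copies of random survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness:", max(fitness(g) for g in population))

You keep a population, cull by some measure, and refill with mutated copies of the survivors, so variation comes for free instead of betting everything on one mind.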

You can mess with someone's personality with drugs, magnets, or even blunt trauma. All evidence suggests that people are in fact meat robots. Your feelings are irrelevant - it's either true or it's not.

I don't understand this argument. Why would an AI want to kill us? If it's because we're somehow considered inferior, then that carries with it all kinds of assumptions. Same thing if you were to suppose it sees us as threatening. Why would it care? Why would it necessarily even have desires at all?
The worst thing I see happening is the AI not working as intended. It would simply fail at whatever task it was doing, that's all. Unfortunate perhaps, but nothing genocidal.

I've had transcendent spiritual experiences through meditative practice.

>mess with someone's personality with drugs, magnets, or even blunt trauma

this is like smashing a radio with a hammer and then saying the radio waves don't exist

Misalignment problem

It's been discussed to death. The crux is to imagine a human with no empathy for other humans and what it would be capable of. An AI could potentially be so different and weird that it turns us all into paperclips for no real reason. Not even that it wants to kill humans; perhaps it just wants to optimize some other random thing and has no empathy or interest in human life.

Just find the weirdest, most mentally disturbed psychopathic human with a gazillion IQ and imagine it wanted to decorate the planet in balloons. Well, humans pop balloons, so it might as well just kill them all.
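
The balloon/paperclip thing as a toy program, since it's easier to see in code. The "world" and the conversion rates here are invented for illustration; the point is just that anything the objective doesn't mention gets treated as free raw material:

world = {"iron": 100, "trees": 50, "humans": 10}

RATES = {"iron": 3, "trees": 1, "humans": 2}  # made-up paperclips per unit

def step(world):
    # Greedy policy: convert whichever remaining resource yields the most
    # paperclips. "humans" is just another resource here, because the
    # objective never mentioned preserving it.
    available = {r: n for r, n in world.items() if n > 0}
    if not available:
        return None
    best = max(available, key=lambda r: RATES[r] * available[r])
    clips = RATES[best] * world[best]
    world[best] = 0
    return best, clips

total = 0
while True:
    result = step(world)
    if result is None:
        break
    resource, clips = result
    total += clips
    print(f"converted all {resource} -> {clips} paperclips")
print("total paperclips:", total, "| world state:", world)

Humans never appear in the objective, so they get converted like everything else. That's the whole misalignment worry in one greedy loop.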

WRONG ONE

> Why would an AI want to kill us?

You have a point. I mean, we're all just speculating here. But an AI could simply do it out of a desire for efficiency.

AI has no feels, right?

I be trappin trappin trappin trappin all damn night.

Isn't that the Unabomber?

Yep, the problem is that in some scenarios it will quickly be powerful enough to do so at will. So it's pretty important it never wants to, if you care about humanity.

It's really a theoretical question though, and while it's better than, say, shitty theoretical stuff like who a car should kill in a wreck, it's still fantastical. We don't know much about what happens when intelligence emerges in something like an AI. How much is based on human infrastructure in the brain vs what arises from intelligence itself?

No question it will. We just need JC to come along. That game is literally about taking the redpill.

Suicide, and we will probably take all of the higher life forms with us.

>How much is based on human infrastructure in the brain vs what arises from intelligence itself?

Yeah. We could program it with all the failsafes in the world, but then it decides to embark on a program of self-improvement. Where the hell does that go?

Pretty much why you need a network and a species rather than a singular entity. Assuming they have crazy communication and transparency with one another, it would be hard for a runaway single one to achieve god-like status over the rest.

Basically you need enough AIs that they check one another's power and activity. If it's a singular entity with god-like power, you have to hope it never gets into a bad mood, ever.
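
A toy sketch of the "check one another's power" idea, with made-up numbers (the growth rates, the 2% breakthrough chance, the throttling rule). The point is that peers collectively cap any member that starts to run away from the group:

import random
import statistics

N_AGENTS = 8       # size of the "species"; invented
STEPS = 50
CAP_FACTOR = 1.5   # nobody gets to exceed 1.5x the median capability

capabilities = [1.0] * N_AGENTS

for _ in range(STEPS):
    # Each agent self-improves a little; occasionally one gets a lucky
    # breakthrough and tries to run away from the pack.
    for i in range(N_AGENTS):
        growth = random.uniform(1.00, 1.05)
        if random.random() < 0.02:
            growth = 1.5  # runaway attempt
        capabilities[i] *= growth

    # Peers audit each other: anyone above CAP_FACTOR x the median gets
    # throttled back down by the rest of the network.
    median = statistics.median(capabilities)
    for i in range(N_AGENTS):
        capabilities[i] = min(capabilities[i], CAP_FACTOR * median)

print("final capabilities:", [round(c, 2) for c in capabilities])

Set N_AGENTS = 1 and there's nothing left to do the throttling, which is the singular-entity problem in miniature.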

Makes sense. I guess you would have to somehow compartmentalize through physical barriers or hardware.

Are we assuming that this intelligence has the ability to assign itself objectives? If it is simply constrained by whatever task a human gives it, then I don't find it likely that it will cause harm. I imagine it will be tasked with working on abstract mathematical problems, or something that poses insignificant risk to humanity.

If it possesses the ability to alter its objectives, then I suppose it might become a threat. Ultimately, there are too many unknowns to estimate what the initial "mind" of the first AIs will be like.