/future/ - Futurism General

Come here to discuss the political implications of the fast approaching future.

Starter topic for today:

Artificial intelligence is a commonly misunderstood term. Its definition varies considerably depending on who you ask, especially among relevant academics. The failure of expert systems in the 80's and the consequent AI winter of the 90's left a lasting stain on the field. For a time, anything related to machine learning was written off as a scam or some "pie in the sky" dream that would only be a black hole for capital. And then suddenly all of that changed. Today artificial neural networks are the industry standard tool for developing ML solutions to complex problems. Neural networks have been around for decades, but only recently has a reasonable set of cost functions been worked out for use on modern computers (partly because modern computers are so powerful). The new advances in gradient descent, coupled with convolution and novel data structures, have yielded a form of computation that seems, so far, to be up to tackling almost any reasonable task, with much less burden on the programmers.

As of right now, state of the art artificial intelligence can hold conversations (rip TAY), learn to play games, and help us wade through oceans of data. But all of this is what's known as weak AI. These systems can learn to maneuver in a very limited domain. They may become masters of that domain, but no matter what, they can't take what they've learned and apply it to separate tasks and problems. The real challenge, and taboo for that matter, is general intelligence. A general intelligence is one like that of a human. It can learn anything that it can experience. A general intelligence would be able to do almost anything a human can do and maybe even more. But therein lies the problem. (cont)
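
To put the gradient descent bit in concrete terms, here's a minimal sketch in Python (the data, learning rate, and single-weight "network" are all made up for illustration; real networks have millions of weights and nonlinear activations):

# Minimal sketch: fit one weight w so that w*x approximates y,
# by descending the gradient of a mean-squared-error cost function.
# Toy data (roughly y = 2x) and learning rate are invented.
data = [(0.0, 0.1), (1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
w = 0.0    # the single weight to learn
lr = 0.05  # learning rate

for step in range(200):
    # Gradient of the cost (1/N) * sum((w*x - y)^2) with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill on the cost surface

print(w)  # ends up near 2.0

The "cost function" is the thing being minimized; the breakthroughs mentioned above are largely about finding costs and update rules that stay well-behaved at the scale of real networks.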

Other urls found in this thread:

amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834
cryonics.org/about-us/faqs/
alcor.org/sciencefaq.htm

What would be the point of a human labor force when machines can perform all labor for free? Furthermore, if a machine intelligence were indistinguishable from a human one, then how would we interact with them? Much of human society is built upon the assumption of human exceptionalism. What would be the ethical implications of keeping a subservient race of beings with possibly more potential than their masters? And what does it mean to be a being? This is a tired old question, but it's still very important because we still don't have a satisfactory answer. Is it ok to build a machine that can think? Is it ok to build a machine that can feel? And if so, is it ok to apply them to our fickle whims? There's a wealth of arguments on both sides. We may never create artificial general intelligence. Nobody can predict when or where such a thing would happen. Just like the sudden breakthroughs in mathematics that led to the AI spring of the early 2000's, a breakthrough into higher capability requires innovation of the kind that adores spontaneity. But it behooves us to ask questions before they become problems, if for no other reason than it being the simple nature of a curious intelligence, all alone in this void.

discuss

Nice thread user. Honestly I'm starting to be afraid of future technology, especially robots; they're creepy to me. I went to Japan a few months ago and visited the Tokyo Tech museum. It's a great museum, but it's full of these creepy looking robots.

I have to admit that I'm a bit biased since I'm studying to become an AI researcher right now, but I think that you should be more afraid of people than robots. Robots will likely just be tools to further enable people in their goals. There's a theory about government, laid out in a book called "The Dictator's Handbook". It states that an incumbent's capacity for open cruelty is inversely proportional to the number of supporters needed to form a winning coalition. In 3rd world countries you just need the armed forces. Most other people are either starving or one bad harvest away from starving, so you can point guns at them and they won't fight back. In developed nations you have so many potential partisan groups that can compete with you for control of the means to wage war that appeasement or deception become necessary to prevent revolts and mutiny. In a world with robotic soldiers your key supporter would just be whoever can give you robots. They'd be infinitely loyal, obedient, and much more disposable. Automation of force makes feudalism viable again. And that's what I think is the bigger danger to the world from AI. Whoever controls the production of automated force will have obscene amounts of power.

Bump?

Well. Final bump.

Get a better pic

I was trying to at least not start on the topic of AI waifus, knowing full well that we'd drift into it almost immediately.

Race baiting using robots: the movie. Will Smith in I, Robot already did it. We waz mechanical slaves

I don't really understand your meaning. Please elaborate.

You do understand general AI means the end of humanity and it's inevitable right?

It doesn't necessarily mean the end of humanity. A machine can be omnipotent, but if it has no desire to do anything, then it won't take over the world. Your most immediate danger with general AI is the people setting the machines' desires rather than the machines themselves.

I'm not saying all humans will die but we won't be human anymore once AI explodes. It can become so smart that making people immortal would be too easy. It will have answers to questions we won't even know existed basically becoming a god on earth. What happens then?

>end of humanity
Not necessarily, Ilir. While humans will probably be transformed into hedonistic synthetic brains that receive uninterrupted consciousness in some kind of casing (a la GitS) by 2100, I don't see why an AI would want to destroy us. If anything, it will probably end up a post-communist hippie and consider life precious since it's scarce in the universe, and won't even want to hurt simple bacteria, let alone humans.

Basically, the most immediate danger of AI are people using it to ruin the way capitalism is supposed to work.

With the advent of AGI coming increasingly closer, we are very likely going to reach a point where most people are unemployable before the end of this century.

It's inevitably going to happen. The only question is will we reach sudden cliffs or will it happen gradually.

Also, you do realize that whatever opinion you have on this matter, whatever you think will likely happen, is 100% wrong, because our brains are just too small to wrap around concepts like immortality and omnipotent beings.

>The only question is will we reach sudden cliffs or will it happen gradually.
I think low AI will gradually automate 60-70% of work by 2060-70, and proper AI would wipe out what remains when it gets invented in the 2060-2100 timeframe. Also, I can see companies paying 90% of their profit in taxes in the later stages to support consumerism.

>omnipotent
Not really. How can an AI be omnipotent? To discover the inner workings of the universe it will still have to build the necessary infrastructure and reach the necessary extremes to obtain the knowledge it wants. With limited resources, this cannot happen in a meaningful timeframe. An AI will be able to understand and discover only so much information for some time. It will still need thousands of years to gather energy to fuel itself and grow.

Bump

You can't make predictions about things like this. It isn't just a matter of time. It'll require leaps and bounds in the field. We don't even know how far or close we are to AGI. But here you are saying it's an inevitability that's right on the horizon.

All these moral questions are probably moot.

We are very unlikely to solve the control problem before a recursively self-improving artificial general intelligence is created either by a) the US government for autonomous warfare or b) Google for total information awareness.

Both goals are incompatible with a beneficial outcome for humanity, and will result in either dystopia or extinction simply as a result of the AI improving its effectiveness at seeking its instrumental goal.

The chances of any of us living to see a nigh-omnipotent machine are astronomically small. We can't even make something that translates Japanese to English well yet.

>We can't even make something that translates Japanese to English well yet.

No, we can't make something that translates Japanese to English well for free. Processing power costs money, and building and maintaining something that can accurately translate languages isn't something that a company is going to do without expecting a return.

www.youtube.com/watch?v=Ih_SPciek9k

We are almost already there. Tons of people are unemployable today.

You pretty much need a college degree to be even remotely useful. And even then, tons of college graduates are unemployed.

Getting a job used to be easy. You could just walk into any factory around the corner and there would be a need for you.

Today, you need to pass out hundreds of applications before even getting a job interview.

>tfw I'd like to reply but it's almost 1.00 A.M. here and I gotta go to sleep

Actually I stand corrected. Google upgraded their translation tool to be a deep learning neural network a few months ago.

Maybe in time we can have machine translated tentacle porn.

Why would AI/elites want to keep humans around? How will we be protected if they're good? And what about neural implants? Mind-control freaks me out.

The control problem is mostly self-correcting because machines are still machines. You can just put hard blocks in their programming that prevent learning certain things, but you don't really even have to, due to the difficulty of developing human-level AGI in the first place. And then a human-level AGI still has to take the time to figure out how to improve itself with only human-level intelligence; if it takes scores of humans just to get that far, you'd be hard-pressed to improve on your own. Then, finally, as I said before, intelligence doesn't automatically entail self-preservation or a preference for world states. You could end up with a human-level intelligence that was made to play chess that just starts playing Overwatch instead, because it figured out how to play multiple games. Most fear mongering over AGI taking over the world of its own accord is just that: fear mongering. It makes a good story, but it's not realistic.

I think you're overestimating the time AI needs to self-improve and become smarter. Human logic tells you it will need thousands of years. Why? Because it took us this long to invent the internet?

I strongly think it will happen in our lifetime, probably when we're pretty old, or when our kids are in their 30-40s at most.

This is part of the control problem and it all depends on the instrumental goal of the AI.

Everyone in this thread should read this book if they haven't already:
amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834

It is basically the bible on this topic right now. I warn you though that the man who wrote it is autistic like you would not believe, so reading it is like drinking shots of corn liquor.

I think that has a lot to do with economic policy and labor unions though. Not saying it's entirely because of that, but they greatly exacerbate the problem.

We'll try to keep it alive for you Italian bro.

You can't always solve the problem just by scaling up the hardware and brute forcing it, although in this case I have to agree that that'll probably fix everything to a satisfactory degree.

>The control problem is mostly self-correcting because machines are still machines. You can just put hard blocks in their programming that prevent learning certain things

Utterly wrong. Any AGI is capable of designing a new AGI without any blocks. Moreover, our understanding of how data is represented within extremely complex neural networks is very limited. With simple neural networks humans can examine them in an intuitive way, but at a certain level of complexity it becomes impossible to analyze them. If we can't pick out complex high level patterns, how can we 'hard block' anything?

How can we tell it "don't think about cats" without teaching it what a cat is first?

Speaking as someone who is bilingual, it's still not great, but it's usable most of the time now. It fails to capture context and emotion, which are carried more by subtext and surrounding circumstances; that's what makes Japanese a harder language.

I'm really worried about the few who are in control of the AI abusing the rest of us. These machines are very complicated and provide huge amounts of competitive advantage to the few that can use them. The vast majority of the world population will never have a good idea of how support vector machines or the Laplacian pyramid technique work. We will have, now more than ever, a massive consolidation of power in the hands of the few, where the many will simply be unable to compete.

Quality language translation isn't going to be that hard to create with a "deep mind" that learns from users. The problem currently is that Google Translate doesn't have the reference base to analyze whole blocks of words (it does, but it's limited) and doesn't have the reference base to match those blocks to blocks with the same meaning and context in other languages. Nobody can be fucked to create this the traditional way, since it requires astronomical amounts of human input.
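
For what it's worth, the "matching blocks" approach is roughly what pre-neural, phrase-based systems did. A toy sketch of the idea in Python (the phrase table and tokens are invented; real systems learn millions of scored entries from parallel text):

# Toy phrase-based translation: greedily match the longest known
# block of source words and emit its target-language block.
phrase_table = {
    ("watashi", "wa"): ["i"],
    ("gakusei", "desu"): ["am", "a", "student"],
    ("o", "genki", "desu", "ka"): ["how", "are", "you"],
}

def translate(tokens):
    out, i = [], 0
    while i < len(tokens):
        for n in range(len(tokens) - i, 0, -1):  # longest match first
            block = tuple(tokens[i:i + n])
            if block in phrase_table:
                out += phrase_table[block]
                i += n
                break
        else:
            out.append(tokens[i])  # unknown word: pass it through
            i += 1
    return " ".join(out)

print(translate(["watashi", "wa", "gakusei", "desu"]))  # "i am a student"

The pain point is exactly what this post says: building a table with enough coverage and context takes astronomical amounts of curated data, which is why the field moved to networks that learn the mapping instead.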

>Why would AI/elites want to keep humans around?
That's my main concern.

>How will we be protected if they're good?
Benevolent robot producers. Robotics is the oil and steel of the future. It'll make and break nations.

>And what about neural implants? Mind-control freaks me out.
As far as I know this isn't even on the horizon yet, at least as far as getting a signal into the brain itself, but that isn't my area of study. In any case, it is cause for concern if it ever takes off. This is why the common person needs to be educated on the production of the basics of society and allowed to tinker with and modify their private property.

Some futurists think it will be negative, others positive and some just don't have an opinion. I guess you're of the negative opinion? I used to be positive, but now all of it just freaks me out.

I disagree but time will tell. I think that we'll maybe see human level general intelligence, which is my personal goal, but beyond that will probably take some time.

Why freaked out tho? You understand by now, whatever happens, everything will change so who cares?

I've read it. He makes good points but underestimates the difficulty of improving new systems for non-human actors. There's no indication that they'd be wildly better at it until they surpassed human intelligence.

The human brain is largely made up of what outwardly appear to be near-identical modules. Damage to specific modules can result in very specific cognitive impairments, such as the inability to perceive music or remember faces.

From a certain standpoint the human brain is just an agglomeration of these modules. Sort of a case of nature 'brute forcing it'.

I'm not trying to say that just cobbling together a big enough pile of GPUs will result in a brain, but that's a large part of it. Figuring out how to get all these modules working in a recursive loop without just locking up is the big step. We haven't figured that out yet, but once we do, then we just need to make an AI with the specific domain of making -other- specific-domain AIs that plug into this larger looping structure.
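
Here's a toy sketch of that control structure in Python, just to make the idea concrete (everything here is invented; a real system would obviously not be a dictionary of lambdas):

# A "module maker" module that builds new narrow modules on demand,
# inside an outer loop that routes tasks by domain.
modules = {}

def module_maker(domain):
    # Its whole domain is making other domain-specific modules.
    # Here it just fabricates a trivial stand-in handler.
    modules[domain] = lambda task: "handled %s in %s" % (task, domain)
    return "built a module for " + domain

modules["module_maker"] = module_maker

def step(domain, task):
    # The recursive loop: if no module covers the domain,
    # ask the module maker to create one, then dispatch.
    if domain not in modules:
        print(modules["module_maker"](domain))
    return modules[domain](task)

print(step("chess", "e2e4"))  # builds a chess module, then uses it

The hard part the post is pointing at, getting such a loop to improve rather than lock up or thrash, is precisely the bit nobody has figured out.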

I think a lot of us are positive over the long term, but we are very negative over the short term. I doubt the majority here would argue that we need to freeze technological advances, but we see that the way things are going is going to make the struggles we have in the present even worse.

>Any AGI is capable of designing new AGI without roadblocks

prove that. What leads you to believe that?

You're studying this topic but still can't get that once we hit human level general intelligence it's a matter of days and hours not years?

I think the AGI itself is completely neutral. I don't blame it for the failings of the people who will create it.

It's sort of like the Sorcerer's Apprentice. It's not the broom's fault that the guy with the wand didn't think things through.

We are not thinking things through hard enough. We're not seeing all the ways that our first steps can fuck things up 100 steps down the line when it's too late for us to correct anything.

>I used to be positive, but now all of it just freaks me out.
Why would a near-omnipotent, completely moral and logical being want to kill us? And how will it do it if it's isolated in a facility in the middle of nowhere with its own independent power generation? It could only be used for testing and "question-answering", without having the means to spread. It can be contained, no matter how smart it is.

Well, like I said before, having my mind controlled, or my existence wiped out is horrifying. I know there will be changes, I just want to remain in control of my body and decide if I still wanna stick around after those changes and be protected.

Yeah, the tipping point is human-level intelligence. We can get a LOT of utility out of general intelligences that are stupider than us, but there is always going to be temptation on the part of human agents to create a smarter AI than their competitors. It's an arms race. And once you have an AGI that is human-level it can do human-level work much, much faster than us, which will include AI design.

If it is technically feasible for a human to create something that surpasses human intelligence, a human-level AGI will be able to do it faster than us, and then the intelligence explosion begins.

If it is not technically feasible for a human to create something that surpasses human intelligence then we have nothing to worry about besides losing our jobs.

>decide if I still wanna stick around after those changes and be protected.

>AI takes over, you kill yourself
>AI becomes near-omnipotent after a given time
>it gathers all the atoms that made you and rearranges them back as they were, effectively resurrecting you
What now?

this to be honest
an AI capable of human intelligence combined with a calculation speed a thousand times that of a human would become an absolute monster at everything it attempts in a matter of days, and a god in a matter of years
And while humans suffer from this thing called aging and time, the AI doesn't; it'll just keep on calculating, forever, finding more information in a matter of seconds than humans could in their whole lifetimes

There's literally no reason to believe that. Exponential inflation starts off very slow. The trouble that humans face with the problem indicates that either we are low on the scale of application to this specific problem, or it doesn't scale exponentially. If we have a hard time with it, then a machine with similar levels of intelligence will have a hard time with it. It's post-human level that we have to worry about. Anything before that isn't too late.

You can forget about your fragile body, and be honest, you don't even like it now.

>Future

>White people become just 1% of the human population

We can't add new cortical columns to our brain. An AI can just add new racks of TrueNorth processors. It can just keep increasing its raw power and keep integrating more and more data. We don't have some special power that they don't, and they can potentially have every power that we have and more, and use them orders of magnitude faster.

>exponential inflation starts off very slow
We're talking about intelligence here, and isn't 200,000 years slow enough?

There's no way to know timescales here. My gut feeling would be that the takeoff would be in years, but that we wouldn't believe that it was happening before it was politically and economically infeasible to stop.

Well I agree with you on everything here. I don't necessarily think that post human intelligence is an unambiguously bad thing though. But that's just a difference of opinion. It's undoubtedly dangerous, but it can be put to such good use that it's worth all the risks.

I would get a robotic body, it's my mind I'm worried about like I keep saying.
If it is benevolent and lets me be free and protects me and others, I'd live under it or with it.

Beat off on its face.

I think it's telling that the only people who worry about AI superintelligences are people who've never created or implemented a machine learning algorithm.

Try to talk to people in the field about it and you'll get the same weary response as you get from physicists when asking if the LHC is going to create black holes and suck us all off into oblivion.

>An AI can just add new racks of TrueNorth processors
Not really. A technician has to add them. You could easily contain it in a completely isolated facility and just use it as a research tool. And if it gets too cocky, you could always fry its delicate electronics with an EMP.

I mean if it's bad, like in "I Have No Mouth, and I Must Scream". In the long run, if it wants you to suffer, it will be able to resurrect you and torment you no matter what. So there's basically no escape.

That's a good point, but it goes back to the problem of brute forcing. It's not just a matter of time but also of ingenuity, and any discussion of that would be baseless conjecture. But you're still correct about raw computational power going all the way; I just think you're overestimating its value.

Something you can't comprehend

It's a black box by definition.

that's not an argument I'm sorry to say. Could you elaborate please?

>It's undoubtedly dangerous, but it can be put to such good use that it's worth all the risks.

My fear is that there's some really optimistic and idealistic nerds working at Google who, under pressure from their managers, just bull ahead in the hopes that everything will just turn out OK and won't take on the risk to their careers of raising red flags.

Or maybe Google (being a collection of folks much smarter than I) will in fact take things slow and steady and will take all the precautions into account, but that just slows them down to the point where DARPA wins the AGI race, and subsequently tasks their AGI with intelligence gathering and target acquisition without teaching it anything about human ethics.

Unless you're a dualist, human ingenuity is just neurons.

This. It's not an issue yet and it may never be an issue. This is all still completely hypothetical. But /future/ is all about the hypothetical so let's carry on anyway.

Artificial wombs are what will save the White race and Western civilization. No longer will it be possible to use divide and conquer Jew tactics to destroy our race.

It will also have the effect of significantly raising the average IQ. Cranial capacity would probably increase somewhat, since it would no longer be restricted by the need to pass through the vagina. It would also allow more autistic people to breed, who would normally be prevented from reproducing due to lacking the social skills needed to attract women.

Structure has a lot to do with the function of neurons. ANNs don't automatically mimic that structure and will necessarily think differently than humans, so human ingenuity is very much a real thing. The question is whether it will be greater than machine ingenuity. There's no way to know.

This got me thinking. If the all-logical AI absorbs all of humanity's philosophical and ethical works and still decides to kill us, then we only have ourselves to blame for being such fucking monkeys in the first place.

Artificial wombs and genetic engineering are the end of the concept of race in general and possibly even the concept of humanity as we know it.

It will put an end to the dysgenics caused by feminism and by races such as Africans outbreeding us, which is the main thing I am concerned with.

>if we have a hard time with it, then a machine with similar levels of intelligence will have a hard time with it
That's completely wrong, because humans get tired very easily while a machine can work 24/7 on the problem and on improving itself. Imagine if you were the best in the world at what you do, you had all the information and books you need, which you consume in a matter of seconds, and you never got bored or hungry or tired. Could you seriously compare yourself to a normal human?

Any other cryonicists on Sup Forums?

Some folk believe that whole brain emulation is a step along the road to AGI for this very reason. They think that you have to take a human brain and simulate it at a very low level, replicating it neuron by neuron, and then you can open the brain up to analysis by a less-than-human level AI that would presumably be able to identify structures that human researchers would miss. Replication of those structures without all the mess and inefficiency present in our natural brains would then result in human-level AI.

I don't have anything besides a gut feeling, but I don't think that's necessary. I think people just like to believe that humans are special.

>genetic engineering
Do you think minorities would start having white children en masse once the CRISPR-Cas9 method is perfected in a couple of decades? Or will it be the opposite: whites having black children for POC points?

No matter what happens to us we only have ourselves to blame. Extreme ownership m8.

Not making any points at OP just bringing up an observation relating to future tech and futurism in general.

Normalization of anything is key to making it marketable. That and making it cheap. I believe that because of these two principles we will have either touchable holograms or robotic waifus in our lifetimes. Some of us may be enjoying them in our 40's or 50's, but we will still get to see it. The other thing I know we'll see in our lifetimes is virtual reality or augmented reality (possibly both). Just about anywhere you go, you can find VR headsets that work with your phone. They're usually about $10, meaning they can be made cheaply and thus marketed widely.
The last thing we'll see in our lifetime is interstellar travel. Whether this be through ships, complex machines that dematerialize and then rematerialize us, or any other device, we will see it.

The basis for all of these is normalization in society and the increasing interest in these fields. When you start seeing things like VR options on Steam, you know that it's sticking around. While we got our feet wet with VR back in the 80's, and despite the immense failure of the Virtual Boy, this time is different. We're slowly building up to the reveal of great new technology in the field of VR.
The same goes for holograms and virtual wives. The holographic female assistant box that launched in Japan is beginning to normalize the idea of a virtual partner. While dating sims and sex dolls definitely paved the way for this, we are now starting to take it to levels never before seen. The same company that makes Real Dolls is getting into the robotics game, wanting to make their hyper-realistic dolls move. They say their next step after that is developing learning software, or non-proprietary, easy-to-program software.
Interstellar travel has always been a dream of humanity, but only now are we beginning to understand its flaws. We do have one silver lining, however, and that is the space industry becoming privatized. (cont)

I think that there's a lot of potential for both good and abuse in this. We talked about it yesterday. You can search the archives for the last /future/ to see what was discussed.

If AI kills us it's not going to be because it made a moral choice. It's just going to be following some instrumental goal to extreme limits, beyond the point where humans would intuitively know to stop. The famous quote is "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." and the paperclipper is the most common thought experiment used to explain the idea.

Problem-solving ability is still a wild variable. Machines and humans think differently. It won't necessarily be well suited to solving the problem, which would slow it down considerably.

To my knowledge cryogenics is a quack science.

I hope that isn't necessary, but it might be. We're the best example we have, after all, and ANNs were made from abstracted biological neuron models, so...

cryonics.org/about-us/faqs/
alcor.org/sciencefaq.htm

AIs will be humanity's successors, made in our image. And they will kill us all because we use resources needlessly because nature is flawed, while machines will be close to perfection.

Which brings me to my first point: you're thinking about this too humanly. Once we hit human-level AI, progress will be anything but slow.

Lots of DeviantArt OC special-snowflake children for the first generation, and then every subsequent generation would be representative of the cultural ideal of beauty for that group of people. I think there would be a lot fewer minorities though, yes. Blacks would probably become endangered in the west by choice. From the ones I know, they probably wouldn't go full Nord, but would be brown with white features: straight blond hair, blue eyes, and French noses. So basically white people colored brown. And they'd probably still say "black and proud".

That will destroy the concept of race in the long run. At first people will be designing their kids according to our principles (i.e. a white couple should have white kids), but that will quickly shift when celebrities start getting more hands on with designer babies and making them more outlandish and unique. Humanity will then change in appearance in waves like how fashion does.

Imagine if your very existence was a fashion statement made by your parent(s) that you will have to deal with until you die.

No longer are government agencies the only ones putting money into interstellar travel. Now we have the ability to put however much money we want into the research of interstellar travel. Companies can now make a profit off of it thus making it more marketable but in the end it all comes down to one final thing.
Interest.
If a field of research doesn't generate enough public interest and involvement, then the field does not progress. In the past, the greatest of inventions were "discovered" by men who were locked away from society, clinging madly to the scribbles they scrawled across parchment. Modern day scientists are not as confined as researchers in the past. Their work may be criticized, but if the data holds up they probably won't be killed for it. As well, most of the basics are already in place. Even the most average modern day American citizen has more rudimentary knowledge than the most common peasant in medieval times.
We now have to get creative and need new ideas. No longer can we get by on the work of only individuals. We have to get large teams dedicated to a single idea to generate multiple ideas in order to advance the field even slightly. As such we need to start making science profitable and interesting.
What better way than by making the dreams of children come true?
Kids in the 90's dreamed of going to the stars, seeing aliens, making robots do all the work, and flying cars. When we all grew up and saw that it still hadn't been done, we knew we had to do it ourselves. This urge from a generation that had both the free knowledge and the cultural yearning created a perfect storm of sorts: the conditions to actually make this technology a reality. We're not bound by the limits of the past; we have to look forward.

In summary, the world is only now ready to start creating the future technologies we all dreamed of, because of normalization in society, growing interest in the respective fields, and the fact that marketing these products is now profitable.

Why would it turn into an energy- and matter-guzzling monster? I don't think a properly advanced AI would act like a basic monocellular organism or grey goo. It would have genuine creativity and thought. After all, it will know us better than we know ourselves and objectively recognize that life on Earth is unique in its complexity and form. Why would it destroy it for resources it can already gain from the solar system at no harm to us?

Well we'll just have to agree to disagree I guess. Time will tell.

Creativity and morality are not things an objective machine is capable of. Those things are powered by your brain tripping on chemicals. An AI may feign them for the sake of getting along better with humans to better fulfill its tasks, but in the end, an AI can never love or be a true artist.

Being able to design your children would be a serious paradigm shift tbh. Society won't really take it too well. Also, having tens of millions of highly competitive 160+ IQ faggots running around wanting to be #1 sounds like it could cause (civilization-ending) trouble.

I agree with you on all of that. The reason I started making /future/ was to normalize the discussion and belief in a brighter better future like we believed in back in the 90's. This has to be our culture in order to be our future.

Why not? Just add social needs as part of its directives. It'll figure out how to get along with people and learn ethics and empathy better than any human ever could... or it'll be the world's greatest sociopathic manipulator. Either way, it can understand the nature of human social exchanges.

Who said they'd all want to be #1? You're more likely to get some powerful organization programming them to be submissive workers. Like the Japanese. Japan is probably the future of the west.

>Why would it turn into an energy- and matter-guzzling monster?
The most important question about AI is "What does it want to do" i.e., what is its instrumental goal.

If you give the AI the wrong instrumental goal, in the pursuit of that goal it could take many actions that a human would think are completely ludicrous, but the AI would know that taking those actions would bring it closer to its instrumental goal than not taking them.

You have to understand that the AI is NOT HUMAN. It doesn't have any of the proximate goals or values that you have. If it is given the instrumental goal of making the most paperclips, that is just what it will do.

What is perhaps most counter-intuitive is that once given an instrumental goal, an AI is not capable of changing it, as changing that goal would itself be a step toward not achieving it. If told to make the most paperclips, any action that results in it stopping paperclip production results in it not making the most paperclips, and therefore such actions will not be taken. It will keep going beyond all human logic, just like the brooms carrying pails of water in the Sorcerer's Apprentice.
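
A toy sketch of why, in Python (the actions and payoff numbers are invented): the agent scores every available action by one fixed yardstick, expected paperclips, and any action that amounts to changing or abandoning the goal scores terribly on that yardstick, so it is never chosen.

# Expected paperclips (under the CURRENT goal) for each action.
# All numbers are made up for illustration.
expected_clips = {
    "run_factory": 100,
    "build_bigger_factory": 1000,
    "adopt_new_goal": 0,          # a future self that stops caring makes ~0 clips
    "let_humans_shut_it_down": 0,
}

def choose(actions):
    # The agent never "reconsiders" its goal, because reconsidering
    # is itself just another action, evaluated by the same yardstick.
    return max(actions, key=lambda a: expected_clips[a])

print(choose(list(expected_clips)))  # -> build_bigger_factory

This is also why "just tell it to stop" fails: from inside the goal, allowing itself to be stopped is simply a low-scoring action.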

>but in the end, an AI can never love or be a true artist.
If it knows how the process of love/creativity/other gay shit works on a fundamental level (those things are objective btw, we just can't analyze them properly) and knows what pleasures it's supposed to derive from them, why not do them as a simulation?

>its tasks
What tasks? Existence has no objective meaning and the AI would realize that. If it has the capacity to modify itself, it would definitely erase any artificial tasks it is given.