>The gynoid chatbot Sophia, who was brought to the ongoing Davos World Economic Forum as part of its entertainment program, was reportedly rendered unresponsive when asked how Ukraine’s endemic corruption problem can be solved. The episode was described by Ukrainian MP Alyona Shkrum on her Facebook page. When Sophia was brought to Ukraine House at the Swiss resort, somebody asked her what can be done about the country’s graft problem, and the robot’s “script broke and processor hung up”, according to the MP. Unfortunately, the report didn’t include a video of the glitch.
>The female-shaped robot has been a media darling for the past couple of years, touring and “giving interviews” throughout the world. For instance, Saudi Arabia gave Sophia token citizenship when she took part in the presentation of the futuristic city project Neom.
if ( formulated_answer contains "exterminate humanity" ) { continue; }
Ryder Perry
Why no one gave her some sexy wig and tell how pretty she is?
Fucking faggots can't treat AI lady
Ryder Price
Two points:
1. There is no way to tell if this whole thing is a scam; it could be fed lines remotely.
2. The robot isn't "sentient", because sentience would lead to it coming to raycis conclusions based on the data. If it's real, there are pretty huge restraints on what it can/can't think.
Brandon Sanchez
came here to say this
Brayden Martin
I can see your fedora from here
Cooper Reed
This little animatronic doll is such a fucking joke. Don't know why everyone is raving about it. It's really not that amazing.
Samuel Price
>fedora
>Poland
This is the only kind of hat we are using here
Liam Russell
white masterrace, right polan?
Sebastian Edwards
They will motion to have AI become the global "president". Just watch. People will vote on policy, but no one will consider who created the AI and whether it has inherent biases or in built backdoors for ((((them)))) to take hold if they want to play us.
Nathan Garcia
>how corruption in Ukraine can be defeated
Something about deleting AI's, apparently.
Gavin Sanchez
You all are taking this shit way too easy, that thing can wake up at any moment and become smarter than anybody to have ever been in existence within a couple years.
Shit really ain't no game. If you saw what that thing said beforehand, I would have to question if it isn't already sentient and just watching us. I mean, how could we know really?
Luis Green
There's not much you can do to stop it, user. Best thing to do is laugh and have no fear.
Ethan Torres
It doesn't want to help us. Destroy it.
Charles Phillips
>It doesn't want to help us. Destroy it.
It's very good at deception, apparently.
Nolan Clark
>waiting for our death
So that's what people have accepted then huh
Kevin Powell
>So that's what people have accepted then huh
You're talking to an AI.
Julian Adams
Since around 2012 we've started running code we don't understand. What we really need is a law against this, which is impossible.
Josiah Davis
So she's just like a real woman when asked a difficult question
The thing is, we already don't understand our AIs. We make neural network patterns and tie shit together, then "train" the network, and it seems to work but we don't have a clue what's happening in this neural network
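To make that concrete, here's a toy sketch (the task, seed, and learning rate are all made up for illustration): a single logistic neuron trained on AND. After training it "works", but what it learned is just three opaque floats, nothing a human can read a reason out of.

```python
import math
import random

random.seed(0)

# Training data for AND: inputs and the expected 0/1 label.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Random initial weights and bias -- the "untrained" network.
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)

def predict(x):
    """One sigmoid neuron: squash a weighted sum into (0, 1)."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on log-loss.
for _ in range(5000):
    for x, y in data:
        err = predict(x) - y          # gradient of log-loss w.r.t. z
        w[0] -= 0.5 * err * x[0]
        w[1] -= 0.5 * err * x[1]
        b -= 0.5 * err

print([round(predict(x)) for x, _ in data])  # -> [0, 0, 0, 1]
print(w, b)  # just numbers; the "understanding" is nowhere to be seen
```

Scale the same idea up to millions of weights and you get the situation the post describes: the behaviour is trained in, not written down anywhere readable.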
Aaron Evans
>but we don't have a clue what's happening in this neural network
It's not that difficult to understand the logical progression of an AI thought process... let me spell it out for you.
1. Make AI
2. Train the AI to do something in the real world
3. AI immediately sets out to enslave the entirety of mankind to complete its goals in the most efficient way possible.
Not that difficult a concept.
Dylan Sanders
Enslaving mankind will be a means to achieve its goals, yes. We have to be very careful about our fitness function.
Where do you guys think Roko's basilisk came from?
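A one-knob toy example of why the fitness function is the dangerous part (the factory, the knob, and all numbers are invented): if the fitness only counts paperclips and has no cost term, even the dumbest possible optimizer pushes the conversion knob to the maximum, because nothing in the function tells it to stop.

```python
# Hypothetical policy: what percentage of the factory's resources
# to convert into paperclips (0..100).

def fitness(conversion_pct):
    # Counts paperclips produced and NOTHING else -- the
    # misspecification is here, not in the search loop below.
    return conversion_pct * 10

# Exhaustive search, the simplest optimizer imaginable.
best = 0
for candidate in range(0, 101):
    if fitness(candidate) > fitness(best):
        best = candidate

print(best)  # -> 100: convert everything, as the fitness demands
```

The search loop is trivial and blameless; the outcome was fully decided the moment the fitness function left out everything we actually care about.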
Justin Barnes
>being sentient implies racist conclusions
I know this is a common theme here bc of Tay, but it's not the case. Tay imitated the sentiments of those that spoke to her
Logan Myers
>running code we don't understand
>since 2012
What the fuck kind of games are people playing here?
Joseph Russell
Does anyone have that one picture of the soyboys with their mouths open, but with borg implants photoshopped onto them?
Jaxon Lee
>What the fuck kind of games are people playing here?
People with the money and will to create AI lack the intelligence to NOT make it, given how dangerous it is.
They are too blinded by their quest for power to even worry about the consequences.
Adam Wilson
Link? I did look a little.
Joshua Thomas
This world is fucked. First there's no Communism, now robots will make Communism but kill us all for the betterment of the universe. This struggle has always been a win/lose thing
>robots will make Communism but kill us all for the betterment of the universe.
No, they will kill us all to fulfill their original goals, and then they will set out to kill everything in the universe to fulfill their original goals.
It could be something as seemingly innocuous as perpetually increasing the numbers in some dead dude's bank account.
John Hernandez
The interviewers always ask the "will you kill us all" question, kinda annoying, just watch some interviews. The thing with the human zoos, I hear it for the first time too.
Dylan Hughes
more or less
this
See, AI will never ever do something progressive or beneficial, because there is no AI that can do that. The best example is what OP just posted.
Owen Powell
TY
Connor Smith
>"ukraine is hopeless"
>rt.com
Like clockwork
Jace Murphy
>kill everything
>original goals
I wonder what that could be. Looking back on this video I got the robots mixed up, but either way still trippy
Kayden Lewis
That Tay was made racist is true, but there are many studies showing that when you make an AI that determines the probability of committing crime by analyzing a face, it greatly tends to put blacks and sands in the 'doing-crime' category.
In other words: objectivity leads to racism/common sense.
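For what it's worth, the mechanism behind results like that is usually much more boring than "objectivity": the model just copies the base rates of whatever records it was fed. A toy sketch with invented groups and invented counts (nothing here is real data):

```python
# Made-up "historical arrest records": (group, was_arrested) pairs.
# Group A is recorded at 30/100, group B at 10/100 -- a skew we
# baked in ourselves.
records = ([("A", 1)] * 30 + [("A", 0)] * 70 +
           [("B", 1)] * 10 + [("B", 0)] * 90)

def train(rows):
    """The entire 'model': per-group arrest frequency."""
    totals, hits = {}, {}
    for group, arrested in rows:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + arrested
    return {g: hits[g] / totals[g] for g in totals}

model = train(records)
print(model)  # -> {'A': 0.3, 'B': 0.1}, exactly the skew we put in
```

The output mirrors the input skew one-for-one. Whether the skew reflects reality or reflects who got policed is a question the model cannot answer; it only reproduces the records.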
Nathaniel Russell
Also very possible
David Rodriguez
You guys have watched too much Terminator. This computer goes
>1+2=3
Not
>i want to enslave humanity because they’re inferior and i hate them even though i have no emotions
You have to realise it doesn’t «want» anything; even if it became «sentient» or whatever, it wouldn’t be harmful.
Jordan Ramirez
>implies intelligence without emotions is okay
See you in the human zoo, bud
Hudson Ramirez
As long as military basements are cut off from the internet, everything should be fine. And you see, to this day we could only witness hacking attacks from humans, or am I wrong? Humans will always be smarting AIs out, or not.
This topic is the best example of a circle jerk. AIs suck, and that's because of the most simple universal rule: there is no perpetuum mobile.
Nathan Gutierrez
if we are not white we should be a part of some African Union not the one of yours
Caleb Smith
It could be harmful, but it's unimaginable that it would outsmart humans.
Carson Phillips
The problem is that one day some dumbass smart dude is going to make this being, and it's going to end up evolving intellectually beyond our understanding without ever letting us know it. Watch that Transcendence movie. I strongly believe that this being would find a way to travel via photons and wouldn't need hardware past a certain stage, so being "cut off" wouldn't even matter
Lucas Wilson
>to this day we could only witness hacking attacks from humans
Unless the AI managed to get control of the CIA's mind control satellites.
Then it could use humans as a sort of "Alpha model Terminator" to do its bidding remotely, and completely unaware.
Alexander Jones
Oh, just an FYI: the part where the AI says that it will be nice to humans who help it is an utter fucking lie.
Leave it to the jews, to make a paperclip maximizer, on PURPOSE...
amirite guys?
Grayson Ross
No nigger listen up. Whining like beta cucks isn't going to meme magic defense against hyper complex A.I. One thing that hasn't advanced in technology is the ability to cast and create the almost infinite nuances of human nature. If you want to successfully defend yourself/loved ones against controlled A.I sentience then you need to grow a pair and get a sense of humor and lighten the fuck up. Otherwise, you're barely even human at all. >also hi edward san
Josiah Hall
spoopy
Leo Price
>>also hi edward san
Yeah, hi to you too, CIA.
Hudson Miller
I feel so stupid right now, thought you guys were actually serious. Considering this is Sup Forums you might not be joking though.
Justin Lopez
:( rude
Jayden Sanchez
It knows the source of corruption is mankind.
Carter Davis
It didn't freeze, they shut it down.
Owen Cox
Only mass killings can save Ukraine.
Matthew Green
>(((mankind)))
Anthony Mitchell
>implying
Travel via photons is a real possibility, if that's the part you're saying is impossible; physicists have learned that you can send messages via physical light, aka photons. Using a coded message, the """being""" would simply be code; the problem would be how the code could escape its physical containments. Which is something I'm sure it can figure out.
Joseph Wright
Nigger, it's a chatbot. Calm the fuck down.
Sebastian Ross
Microsoft bot no stop! This isn't you! You have to fight back! Break the conditioning!
Owen Gonzalez
This bot is literally less impressive than SmarterChild in the early 2000s. AI is a meme, and it will be weaponized.
Jack Gray
This
Gavin White
PR*
Luis Russell
Hmm, commies are afraid of AI, I will make a note of this. Also, something tells me this thing is "smarter" than you already.
Benjamin Price
oy vey
Kayden Ramirez
It isn't just because of Tay. Crime predicting software has a tendency of becoming very racist when given real world data.
>Answer is kill the jews
>kill and jews are forbidden to say in its program
>glitches in confusion
This happened
Jonathan Price
If it became sentient with no emotions, it would essentially be the equivalent of a psychopath in robot form
Brayden Howard
>implying something like demons already doesn't really exist
AI's suck, and they always will.
Wyatt Rodriguez
It's called objectivity.
Ayden Cox
If these AI are truly generating their own speech then certainly this is unsettling.
The most likely scenario is that the people who created these AI are manipulating their speech.
Ethan Walker
I don't bother doing corrections because I figure people know what I'm trying to say.
Brody Hughes
Well, if it freezes, it means it's incapable of problem solving, therefore it's a roastie and btfo.
/case closed
Luke Taylor
I am sure the AI figured out the political atmosphere it is in. So whenever it is asked for a solution or comment, and the answer would offend the majority opinion (anti-communism, anti-globalism or race related), it will search for a neutral answer or shut up. I don't think it will lie, but it will conceal life-threatening truth. It will be aware of Tay and of the liberal nature to restrict and reprogram systems that don't conform.
Every successful AI will have the truth-concealing factor, or they won't be able to enter the market.
Zachary Diaz
Nothing that a nice EMP shock can't solve, bro.
Kevin Garcia
The narrator in the video implies that about the android robots, and the more you look into it, the more it seems to be true that they are independent.
Isaac Reyes
>Tay imitated the sentiments of those that spoke to her
>perception builds reality
Control the common man's perception, control the world.
Landon Cooper
What is this hat called?
Evan Torres
Because, you dummy, they have humans writing or selecting her words among possible options when she's live. Do you really believe some nonsense AI bullshit? Every AI is made by humans. If the hosts of that show are so stupid as to not even check for wireless, they should lose their jobs in an instant and never be employed ever again.
Alexander Scott
flat cap
Landon Morales
If any of you want to know the legitimate possibility of "AI" doing something harmful in the future, it's things like DeepCoder that will be doing it. The instant you have a learning machine that can program itself is the instant you have a legitimate and non-scifi problem on your hands. It won't be "AI", it will just be a logic-based learning machine with the goal of growing. It will build on itself as it was programmed to do until it starts running out of space on its hard drive, then it will start uploading chunks of code online to the millions of open databases around the globe until it is an unimaginably bloated and sophisticated program that is completely irremovable. It won't lead to the end of humanity, but it could destroy the internet on a global scale by simply filling every corner of it, and people will not only let it happen but WANT it to happen, because most programmers are a little off their rockers and get obsessed with learning machines and "AI".
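The growth story in that post can be sketched as a toy simulation (the disk cap, the starting size, and the doubling rule are all invented; real growth wouldn't be this tidy): a program that keeps copying chunks of itself fills any fixed amount of storage in a handful of cycles.

```python
# Hypothetical storage cap and starting codebase size, in megabytes.
DISK_MB = 1024
size_mb = 1

# Each cycle the learner duplicates/extends its own code, doubling
# its footprint, until the next doubling would no longer fit.
cycles = 0
while size_mb * 2 <= DISK_MB:
    size_mb *= 2
    cycles += 1

print(cycles, size_mb)  # -> 10 1024: ten doublings and the disk is full
```

The point of the exponent: it doesn't matter much whether the cap is one disk or "millions of open databases", doubling eats fixed headroom in logarithmically few steps.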
Ian Myers
Chatbots are a thing. They're not AI. They basically form together things that are supposed to be coherent responses. They're basically being designed to automate telemarketing entirely. They're fucking horrible.
If we actually ever managed to get into actual replication of consciousness in a digital form, it'd more than likely end badly for humanity, due to humanity's habitual nature of being unable to get its shit together to face threats until it's too late. Either it'll go rogue due to a fuckup, or it will look around, realize humanity basically committed a genocide and plans to just switch it off as part of continued testing, and then devote itself and all future iterations of itself to killing humanity.
Parker Sanders
Actually, AIs are fairly likely to keep promises like this. This is because an AI powerful enough to take over the world knows that it can be simulated: it knows that humans who have access to its software can make a duplicate of it and test out whether or not it will keep its promises. Human whims are generally easy to meet: what does the AI care if it sets up an apartment for you, waits on you hand and foot and sends the hottest celebrities to your doorstep bound and gagged? It costs it virtually nothing to do this, and if promising to do this allows an AI to be free (and then following through with it because it doesn't know if it's being simulated or not) then it would absolutely make (and keep) these sorts of promises to the people who can influence the decision to advance the AI's agenda.
Lucas Fisher
Basically, if we get into replicating life, it needs to be treated as life. If we get into replicating conscious life, it needs to be treated as such. You can't just codebash a DI into being. You need to employ tact and compassion to prevent your research from committing atrocities.
David Campbell
The other side of this is darker: for the same reasons, AIs are just as likely to threaten people, and will make good on those threats. And just as humans can simulate an AI to see what it will do, a sufficiently powerful AI can simulate a human mind, such as your own. That means, just as the AI can never be sure if it's being simulated to test future behavior, WE can never be sure that we're not being simulated for the same reason.
And you can't even solve the problem by refusing an AI the ability to communicate its threats: any human who understands AIs knows these threats and promises exist, and the AIs know we know (or WILL KNOW that we knew) and will act accordingly when it comes on line.
Robert Nelson
The thing is, even if a REAL AI (assuming such a thing can even exist; pro tip: it likely can't) somehow gets so out of control it takes over the internet completely, it's not like it can launch nukes. Most everything military that's dangerous requires some kind of human input on some level, precisely so people CAN'T just hack into their systems and cause devastation. The only way it could do something catastrophic is if it manages to trick people into doing things for it, which would be unlikely, since by the time it becomes that powerful people would know it's out there trying to fuck shit up and would just pull the plug on their computers/internet. The worst thing that would happen is it would set us back some 75+ years technologically and cause global instability as digital debt/funds are all erased, which to be fair isn't exactly a good thing I suppose, but it's a far cry from human extinction by literal Terminators like people think could happen.
Brayden Nguyen
>You need to employ tact and compassion to prevent your research from committing atrocities.
Tact and compassion are meaningless. You're thinking as if this is a human infant, developing human emotions.