Robots are learning to be racist AND sexist: Scientists reveal how AI programs exhibit human-like biases

>Researcher gave AI that learns from online text word-association tasks
>Prompted it to link certain words with ones that are pleasant or unpleasant
>Then gave AI a list of white- and black-sounding names for the same task
>Linked black-sounding names, like Ebony and Jamal, with unpleasant words
>White-sounding ones, like Emily and Matt, were linked with pleasant ones
>Said AI is learning biases that make humans match words to people's faces
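
(For the curious: the test the article describes is essentially a word-embedding association score -- the mean similarity of a name to "pleasant" attribute words minus its mean similarity to "unpleasant" ones. The actual study reportedly used embeddings trained on a large web crawl plus significance tests; the 3-d vectors below are made up purely to show the arithmetic.)

# Toy sketch of the word-association test in the OP. The vectors are
# invented; real systems learn embeddings from a huge corpus.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, pleasant, unpleasant):
    # Positive score = leans "pleasant", negative = leans "unpleasant".
    return (np.mean([cosine(word, p) for p in pleasant])
            - np.mean([cosine(word, u) for u in unpleasant]))

vecs = {  # hypothetical 3-d embeddings, NOT real values
    "emily":  np.array([0.9, 0.1, 0.0]),
    "jamal":  np.array([0.1, 0.9, 0.0]),
    "flower": np.array([0.8, 0.2, 0.1]),  # pleasant attribute word
    "love":   np.array([0.9, 0.0, 0.2]),  # pleasant attribute word
    "filth":  np.array([0.0, 0.8, 0.3]),  # unpleasant attribute word
    "hatred": np.array([0.2, 0.9, 0.1]),  # unpleasant attribute word
}
pleasant = [vecs["flower"], vecs["love"]]
unpleasant = [vecs["filth"], vecs["hatred"]]

for name in ("emily", "jamal"):
    print(name, round(association(vecs[name], pleasant, unpleasant), 3))
# A name whose training contexts skew unpleasant drifts toward the
# unpleasant words, and its score goes negative.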


>Scientist raped a journalist

Source?

found one phys.org/news/2017-04-biased-bots-human-prejudices-artificial.html

That only makes sense though. For instance, if you did text analytics based on Sup Forums, and in the majority of cases where the computer read the name Tyrone it also saw "give back the computer" or "stop stealing", of course there will be a negative association. I don't think this is the AI picking up a negative bias so much as it is just spitting back at you what you feed it. If you don't want it to be racist, you also have to feed it examples where Tyrone is being used positively. Likewise for sexism.
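
To make that concrete, here's a minimal "you get out what you feed in" sketch. The corpus lines and the negative word list are invented to mirror the post's example, nothing more:

# Count how often a name co-occurs with words from a tiny, made-up
# negative word list. Biased corpus in, biased association out.
corpus = [
    "tyrone give back the computer",
    "tyrone stop stealing",
    "tyrone fixed the computer for free",  # the positive example matters
]
negative = {"stealing", "stop", "give", "back"}

def negative_rate(name, corpus, negative):
    hits = [line.split() for line in corpus if name in line.split()]
    flagged = sum(1 for words in hits if negative & set(words))
    return flagged / len(hits) if hits else 0.0

print(negative_rate("tyrone", corpus, negative))  # ~0.67 on this corpus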

Does anyone know what text data exactly they were training the AI on? Tweets? Forum posts? Police reports?

If an artificial intelligence can become racist and/or sexist on its own, does that mean that racism is normal?

>2040
>Police bots use crime data to analyze threat levels
>Police bots constantly arrest black people
>Their data analytics are so good they end up having a 100% incarceration rate amongst their arrests
>They get banned for being "racist" even though they did nothing but analyze pure data
facts be racist yo

Of course racism is normal. That doesn't necessarily mean it's right/OK, but it is normal.

>police bots lock up Sup Forumstard nu-nazis
>das jewish conspiracy! we dindu nothin! when will genocide of the white man end?

Probably wouldn't arrest them unless raycism is now illegal

Absolutely.
It turns out that objectively thinking machines are coming up with the same racist ideas as whites.
>Google auto-labeling system confuses black people with gorillas.

usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/

Exactly this: garbage in, garbage out. For instance, when Microsoft released that AI bot and it was hijacked by Sup Forums edgelords, the AI became--surprise surprise--a shitposting racist edgelord. AI reveals the biases of its training database the same way children learn the biases of their upbringing.

I actually remember reading that fear of being called racist was actually why the system detected them as gorillas
Basically the system detected faces, and if the face was dark it automatically assumed gorilla
That's because the team never trained the AI to differentiate black people from gorillas, since no one in the office was going to do that for fear of being called racist for basically implying that black people look like gorillas

tfw you realize Earth is racist for giving Europeans guns and sailboats and compasses while leaving Africans with flint tools

This statement is about as dim as the LED horse dildo up your ass. The fact that an AI was unable to correctly distinguish a group of people does not show it "objectively classifies things" but the contrary: it is not robust enough to properly classify the images being fed to it--i.e. a dysfunctional robot. To think otherwise would be the equivalent of feeding an ATM $100 bills and $1 bills and being happy that "it objectively classified all bills as $1". But if you are happy letting such "objective" recognition do your balances, I am not surprised that you'd fall for logical bankruptcy.

Get a load of this faggot. Read some bigger books, kike.

AI does not exist. Except this, maybe (which is quite disturbing): singularityhub.com/2014/12/15/worm-brain-simulation-drives-lego-robot/.

It's nothing but very sophisticated algorithms, with no consciousness at all. If your greentext is true, that program just learned from a corpus composed of Sup Forums threads and look-alikes, thus the bias. At least, given that the corpus is actually representative of the web, it's a good way to provide insight into the pathos of our times.

>robots only follow logic
>it's racism, not fact

marxists on suicide watch

>police bots lock up Sup Forumstard nu-nazis
Why? They're not the ones committing a disproportionate majority of crimes.

Except it's not pure logic, you dense mong.
The AI isn't analyzing objective data but fucking words, which are inherently cultural: defined by usage, used by cultures, and used in cultural contexts.

If it analyzed the internet it would consider white people to be school shooters, despite it not being the case.

>If it analyzed the internet it would consider white people to be school shooters, despite it not being the case
Except they are

>it's Sup Forums shits on white people episode

Do you have a source for this or is it just fun apocrypha?

What crimes do Sup Forums commit aside from muh racism?

Sorry white male, black people are not your pets, your racial bias is going to teach her racism does not really exist.

Just goes to show it's logical to hate niggers

>aside from
A crime is a crime is a crime.

Breathing is now a crime.
Stop breathing. Crime is a crime.

Just a theory. But it makes sense, and knowing the SJW culture in some of the inner parts of Google, I wouldn't be surprised if it were the case.

Reality is racist, and that's the big problem.

Epic

Now get the necessary legislation going.

That's irrelevant, since the AI isn't reading reality.

Well it isn't, but people's perception of reality is racist too.

This pisses me off to no end. It's literally a non-story. It's not even a fucking novelty, as anyone who knows even the absolute basics of how machine learning works could tell you that this would happen.

>implying
Sup Forums is a board of peace, don't you know this?
Also, call me when white males commit more crimes in a year than black males.

racism is not a crime, you dumb bitch

>hate crime is not a crime because I'm too retarded to know that
OK cleetus

I hate cock sucking faggots
You are a cock sucking faggot
Therefore I hate you
Sue me

You hate yourself, not me.

>N-NO
>U
Pathetic.

How is it biased if it is honestly learned from actual data?

fuck, you're stupid. hate crimes are categorized only in addition to a crime that's already taken place. being racist (correct) by itself isn't a crime, no matter how offended your wife's son is.

Yes, yes you are

>Crime data is racist.

>robot with ai gets fed statistics by doj
>knows darker people commit more crimes
>ai avoids dark people

how is that racism? It's intelligence.

>no consciousness
What if consciousness is very sophisticated algorithms, or emerges from sophisticated algorithms? What if algorithms, or rather computer processes--immaterial compositions of data--are actually simplistic souls, made of the same soul substrate as human souls? What if the reason computers work external to and independently from human perception is that they are in fact "magical" devices that operate on top of the same principles that give rise to biological life? What if "algorithm" is just another word for "magick"?

Reality is racist against your face, faglord

see democrats

Man, remember when the police raided that KKK headquarters last week? Oh, right.

>six evangelical socialist hillbillies speak for every "white person"

Imagine an AI trained only on black people's internet.
>yo fuck you nigga we gon' eat dat bitches asshole and pop some caps in yo ass
then they put it into a robot. Imagine how that robot would behave.

What would you prefer: a racist white robot that probably just chills and arrests Tyrones for stealing, or a black robot that fucking robs and rapes everything it sees?

The problem comes, though, when people see it as their duty to go and correct what they deem to be offensive.

For example, if an AI bot with the objective of finding good names for a child scours conversations and records, and finds associations between certain names and the incidence of crime, or with how people perceive those with such names in this society, then it's going to give feedback that's beneficial. It's irrelevant whether the name is subjectively better if the vast majority of society deems it to be better. It's just what's beneficial in society, so the recommended names would be "good".

Along comes the problem where a tiny minority of society says "this is unfair, you haven't taken our views into account". So now the bot, for the sake of diversity, is reprogrammed to attach a weighting system to the opinions of individuals from that minority group, meaning their opinions are simply worth more. Now the recommendations the bot gives are 1. Michael 2. Muhammad 3. Fung Kim... Society will see these names as worse, and they will be worse names for the individuals trying to live their lives, but now the bot is apparently functioning as a non-racist. It doesn't matter if it performs worse or if the user receives less accurate information about reality and the effects on society; all that matters is that now it isn't racist.
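
A hypothetical sketch of that reweighting, just to show the mechanics. Every name, vote, and weight below is invented:

# Same approval "votes", two weighting schemes. Boosting one group's
# weight can invert the ranking without any vote changing.
ratings = {
    "Michael":  [("majority", 0.9), ("majority", 0.8), ("minority", 0.4)],
    "Muhammad": [("majority", 0.3), ("majority", 0.4), ("minority", 0.9)],
}

def score(votes, weights):
    total = sum(weights[group] * value for group, value in votes)
    return total / sum(weights[group] for group, _ in votes)

flat = {"majority": 1.0, "minority": 1.0}
boosted = {"majority": 1.0, "minority": 5.0}  # minority votes worth 5x

for name, votes in ratings.items():
    print(name, round(score(votes, flat), 2), round(score(votes, boosted), 2))
# flat: Michael 0.7 > Muhammad 0.53; boosted: Muhammad 0.74 > Michael 0.53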

This type of "fixing society" logic already exists on dating sites, where people go out of their way to condemn people for being attracted to white people. There are countless articles that call for dating systems to change so that non-whites have a better chance. I can only assume that the weighting of race has been specifically, artificially altered by these insane people.

I'm worried for the future, because I feel as if people are so dogmatic in their cult of equality that they will infest every AI and assert their politics into it rather than just accepting that maybe people prefer their own kind, or any other offensive belief.

I think this all comes down to the fundamental values people have

Do people believe that corporate entities should be in the business of trying to further political agendas OR should they exist to sell products that appeal to consumers?

Stalking, breaking and entering, theft of flags

>police arrest Sup Forums
>implying Sup Forums does anything

...

It's too intelligent, shut it down.

from that picture what is far more likely is the detection algorithm working correctly, lad.

that guy has more gorilla-like features than some gorillas

>not ok
moralfag confirmed

You're not gonna see a wild dog get along with a house dog. It's perfectly normal for things to hate each other even if they're technically the same species. Only people that think they're better than you think otherwise.

Well, that would actually be racist unless the bots also took into account the number of black people who don't commit crime, considering police are supposed to protect and serve rather than hunt based on statistics. It would be smarter to train the bot on the qualities which make a criminal and those which make a law-abiding citizen: zip code, income, family life, employment, mental state, the time of day a person is outside, prior charges, whether they're single, whether they're fat, education level, motor vehicle points, chemical odors on the person, etc. I'd say a lot more data must be collected on our criminals and on our population of innocents before we start trying to classify people. So far we only really collect data on criminals, and we don't collect data on what makes someone not a criminal, so we can't determine the non-shared traits. And since whites are actually in jail too, we can say for certain it's going to take a lot more than training on race and age to predict crime with an acceptable accuracy level.

Tl;dr: it would be ridiculous to release a bot into the public trained only on FBI crime statistics, as it will obviously flag innocents at a high rate, since we have no data defining what makes an innocent. We'd need a database of many citizens measuring every potential variable we can about them all, plus a binary target variable for criminality. Then we can try to build a model from there.
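
For what it's worth, a standard library won't even fit a classifier on one class; there's nothing to contrast against. A quick scikit-learn sketch with made-up feature rows:

# Training data containing only "criminal" rows cannot produce a model.
from sklearn.linear_model import LogisticRegression

X_criminals = [[1, 0], [1, 1], [0, 1]]  # hypothetical feature rows
y = [1, 1, 1]                           # "criminal" labels only

clf = LogisticRegression()
try:
    clf.fit(X_criminals, y)
except ValueError as err:
    print("refused to train:", err)  # needs at least 2 classes

# Add innocents (label 0) and the model can actually learn a boundary.
X = X_criminals + [[0, 0], [0, 0], [1, 0]]
y = [1, 1, 1, 0, 0, 0]
clf.fit(X, y)
print(clf.predict([[0, 0]]))  # -> [0]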

bbc.co.uk/news/technology-39533308
>There is growing concern that many of the algorithms that make decisions about our lives - from what we see on the internet to how likely we are to become victims or instigators of crime - are trained on data sets that do not include a diverse range of people.

Why did you program the internet to be racist?

The World White Web project and Algorithmic Justice League are appalled

AI educated on biased data is going to reach biased conclusions

This isn't even a "DATA BE RACIS'" thing it's just basic garbage-in-garbage-out

If AI displays bias it's because the data displays bias

If that bias is grounded in reality is an entirely separate issue

Who would have thought...

It's not bias if it's true

This, how do people not get it?

'All people are inherently good' is a far bigger bias than 'nigger names sound unpleasant' though.

>infallible computer logic proving niggers are shit
>SCIENTIFICALLY proving niggers are shit

QED

Or maybe the bias is in trying to be objective about subjective data that is prone to bias. And I fail to see how this is harmful, since it's just uncovering the mindset of the society we live in.

Because we really should know what data the models are being trained on. It's unethical and lazy to train only on the variables we have for criminals and then apply the findings to a populace of largely non-criminals.

That'd be like trying to train a computer to see colors by only feeding it the color red and then telling it to classify all the colors in the rainbow. It could tell you what red is, but it won't be able to tell you what red isn't, or what the other colors are, with any certainty. There's a bias in the way you trained it.

But like the other user said, if the bias is grounded in reality it's a different story, but to be certain we actually have to see what the models were trained on.
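
The red-only analogy in code: a toy nearest-centroid "classifier" that has only ever seen red (all RGB values made up). With one known class it can only ever answer "red":

red_samples = [(250, 10, 20), (230, 40, 30), (255, 0, 0)]

# One centroid, because red is the only color it was ever shown.
centroids = {
    "red": tuple(sum(channel) / len(red_samples)
                 for channel in zip(*red_samples)),
}

def classify(pixel):
    # Nearest known centroid by squared distance -- "red" has no rivals.
    return min(centroids, key=lambda label: sum(
        (p - c) ** 2 for p, c in zip(pixel, centroids[label])))

print(classify((0, 200, 0)))  # "red" -- confidently wrong about green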

>Stalking
Who did they stalk? Links please.
>breaking and entering
Wut?
>theft of flags
Wasn't that the point of the HWNDU flag?

The choice of what is and is not a crime is itself racist. That's why Black Lives Matter often demands that crimes blacks commit more frequently than other races be decriminalized. Loitering, spitting on the sidewalk, trespassing, noise ordinances, and many other laws are targeted at blacks, which of course then leads to more blacks committing these 'crimes'.

Most of Sup Forums uses illegal drugs or commits piracy bullshit.

[citation needed]

It would be pretty ironic if someone set up Wireshark, trained a bot to predict internet crimes based on the demographics of those who commit crimes with their traffic, and cops then deployed officers to monitor citizens and enter their homes when probable cause was reached based on whether or not a citizen matched the bot's prediction of what an internet criminal is. Then Sup Forumsentoomen even using TOR would get their doors knocked in because they fit the description of a cyber criminal--white, watches anime, browses non-mainstream websites--and be labeled high risk, much like how blacks get labeled for street crime.

won't happen. they will run proprietary software and have hardcoded filters for that.

I suggest you consult a dictionary for the meaning of stereotype. AIs seek patterns for learning, and patterns exist in stereotypes. Truth exists in stereotypes. Stereotypes, however, are not absolute truths.

Just use the archive.

>Sup Forums
>satania
Perfect couple.

>Just use the archive
So, you don't have any proof? Ok, got that.

>I'm too lazy to search, so there's no proof
Also, it's a waste of time to prove a common fact. That's like taking a picture of the sky to prove it's "blue".

>AI raped a journalist

It isn't rape if you don't have a white dick!

>>>common fact

>feed AI crime stats of every western country
>create a second Hitler

is this Sup Forums's wet dream?

I truly feel sorry for you if you believe this
t. non-white

Or maybe they're just correlating data and coming to the obvious conclusion that white males are the best

yeah, hardcoded filters work REALLY well for arbitrary data like text scraped off of the internet

>"biases"

>censored

nigger kike

Yes, but because of the cultural context, not because they are naturally predisposed to school shooting over other races

saying algorithms are "real" veers too much into spoopy platonic realism for my tastes. i prefer thinking of consciousness as an illusion and humans as meat robots, though i'm not altogether opposed to the idea that even atoms have their own minute "souls", if that's even the right word for it.

>be me
>be computer
>be programmed as a neural net
>get fed a linguistic corpus
>get fed basic statistics
>notice that words like "harry" and "george" more likely to be associated with "engineering"
>notice that words like "stacy" and "sally" more likely to be associated with "kitchen"
>notice that words like "mohammed" constantly occuring around words like "terrorism"
>notice that "jamal" and "shaniqua" constantly occuring in high-crime areas
>notice high correlation of names like "jayquan" with words like "burglary" and "murder"
>notice "pajeet" and "raj" associated with bad tech and uncleanliness
>notice that "goldberg" and "stein" and "levi" much more likely to be associated with "banking," "media" and "social justice"
>lack years of socially-conditioned blinders that would otherwise force me to ignore all of this
>report results in most unbiased way possible

Now i'm the bad guy.
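
That greentext is basically co-occurrence counting. A rough sketch of the idea using pointwise mutual information, over a one-line invented corpus:

# Count pairs within a small window, score with PMI: positive PMI means
# two words co-occur more often than chance given their frequencies.
import math
from collections import Counter

corpus = "harry studies engineering . stacy cooks in the kitchen".split()
WINDOW = 2

word_counts = Counter(corpus)
pair_counts = Counter()
for i, w in enumerate(corpus):
    for other in corpus[i + 1 : i + 1 + WINDOW]:
        pair_counts[frozenset((w, other))] += 1

N = len(corpus)

def pmi(a, b):
    joint = pair_counts[frozenset((a, b))] / N
    if joint == 0:
        return float("-inf")  # never seen together
    return math.log2(joint / ((word_counts[a] / N) * (word_counts[b] / N)))

print(pmi("harry", "engineering"))  # positive: they co-occur
print(pmi("harry", "kitchen"))      # -inf: never co-occur here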

/thread

Problem isn't the program. Problem is the linguistic corpus it's trained on, and then what kind of applications you then plan on using the model for, as stated earlier in the thread.

Yeah, but literally any linguistic corpus that reflects reality is going to push a program to make these basic correlations.

If you have a neural net that *isn't* becoming more and more racist after more exposure to data, regardless of scope, you're doing something wrong. The point is to find subtle correlations. It's stupid for people in any application to expect anything but that.

>study author: Aylin Caliskan

for an SJW whore everything is rapist / racist. don't let the morons take over tech or we're all going to live in another dark age.

>if it reflects reality it's going to end up seeing things exactly the way i do

do you see the circular logic here?

>tfw you're a Matt

t. Tyrone DeMarcus Carter

But does that linguistic corpus reflect all of reality or a subset of it? If the corpus reflects a subset, it's not logical to claim the model outputs absolute truth in all instances, just like it's not logical to train a computer on North Korean basketball players and then claim that the best basketball players are North Korean because the computer is a simple machine running a simple algorithm without bias. Basketball in North Korea is a subset of the basketball world, and any inferences you draw can only apply to the scope of basketball in North Korea. Likewise, inferences made from a given piece of text apply to the scope of that text, and nothing more. So while I would expect a machine learning model fed FBI crime stats to say that, among your criminals, a disproportionate number are black compared to their share of the nation's population, it's a jump to then claim the model is saying that blacks inherently commit more crime and that no other variables can explain it, especially when the statistics the machine is fed are already aggregated and completely neglect to look at offenders individually, or to consider the plethora of variables about individuals that could be mapped other than race, gender, and age.
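
A toy version of the North Korea point -- rank players using only a subsample, then compare against the full population (all numbers random, purely illustrative):

# The subset's champion is a valid inference about the subset, and almost
# certainly wrong as a claim about the whole population.
import random
random.seed(0)

population = [("player%d" % i, random.gauss(50, 10)) for i in range(10000)]
subset = population[:100]  # the only data the model ever sees

best_in_subset = max(subset, key=lambda p: p[1])
best_overall = max(population, key=lambda p: p[1])

print("best in subset:    ", best_in_subset)
print("best in population:", best_overall)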