Racial bias infects AI decision-making

fivethirtyeight.com/features/technology-is-biased-too-how-do-we-fix-it

So much for reality and objectivity having a liberal bias.

What does it say about your ideology that it requires lobotomizing AI?

Here's the archive
archive.is/Vvhs8

Thanks I was in a hurry

>What does it say about your ideology that it requires lobotomizing AI?
no its about lobotomizing the developers, because (((they))) have determined that its the developers racial biases that are exposed through the AI

the AI isnt broken, the people who perceive a problem in objective deterministic mathematics have a problem of brokenness and need to fix themselves

the mad calling the sane mad

archive.fo/iRCtG
>Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

archive.fo/hSWqK
>But know: Machine learning has a dark side. “Many people think machines are not biased,” Princeton computer scientist Aylin Caliskan says. “But machines are trained on human data. And humans are biased.”

So TL;DR is "machine analyzes the raw data and comes to the conclusion that nigs perform the worst of the races across all the boards"?

Yeah, basically this.

So they either:
>deny it accurate data, giving inaccurate predictions
>reprogram it so it will inaccurately analyze the data, giving inaccurate predictions
joggin4noggin

>Jonathan Frankle, a former staff technologist for the Georgetown University Law Center who has experimented with facial-recognition algorithms, can run through a laundry list of factors that may contribute to the uneven success rates of the many systems currently in use, including the difficulty some systems have in detecting facial landmarks on darker skin, the lack of good training sets available, the complex nature of learning algorithms themselves, and the lack of research on the issue. “If it were just about putting more black people in a training set, it would be a very easy fix. But it’s inherently more complicated than that.”
>He thinks further study is crucial to finding solutions, and that the research is years behind the way facial recognition is already being used. “We don’t even fully know what the problems are that we need to fix, which is terrifying and should give any researcher pause,” Frankle said.
Basically, niggers are worse at being detected by facial recognition AI for some reason, and the researchers don't even know what the problem is. Fairly straightforward

>New laws and better government regulation could be a powerful tool in reforming how companies and government agencies use AI to make decisions.
Are you fucking kidding me
>Captcha literally has "Orwell" in it