>Can machines think – and, if so, can they think critically about race and gender? Recent reports have shown that machine-learning systems are picking up racist and sexist ideas embedded in the language patterns they are fed by human engineers. The idea that machines can be as bigoted as people is an uncomfortable one for anyone who still believes in the moral purity of the digital future, but there’s nothing new or complicated about it. “Machine learning” is a fancy way of saying “finding patterns in data”. Of course, as Lydia Nicholas, senior researcher at the innovation thinktank Nesta, explains, all this data “has to have been collected in the past, and since society changes, you can end up with patterns that reflect the past. If those patterns are used to make decisions that affect people’s lives you end up with unacceptable discrimination.”
>Robots have been racist and sexist for as long as the people who created them have been racist and sexist, because machines can work only from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics. As long ago as 1986, the medical school at St George’s hospital in London was found guilty of racial and sexual discrimination when it automated its admissions process based on data collected in the 1970s. The program looked at the sort of candidates who had been successful in the past, and gave similar people interviews. Unsurprisingly, the people the computer considered suitable were male, and had names that looked Anglo-Saxon.
>Automation is a great excuse for assholery – after all, it’s just numbers, and the magic of “big data” can provide plausible deniability for prejudice. Machine learning, as the technologist Maciej Cegłowski observed, can function in this way as “money laundering” for bias.
>This is a problem, and it will become a bigger problem unless we take active measures to fix it. We are moving into an era when “smart” machines will have more and more influence on our lives. The moral economy of machines is not subject to oversight in the way that human bureaucracies are. Last year Microsoft created a chatbot, Tay, which could “learn” and develop as it engaged with users on social media. Within hours it had pledged allegiance to Hitler and started repeating “alt-right” slogans – which is what happens when you give Twitter a baby to raise. Less intentional but equally awkward instances of robotic intolerance keep cropping up, as when one Google image search using technology “trained” to recognise faces based on images of Caucasians included African-American people among its search results for gorillas.
>These, however, are only the most egregious examples. Others – ones we might not notice on a daily basis – are less likely to be spotted and fixed. As more of the decisions affecting our daily lives are handed over to automatons, subtler and more insidious shifts in the way we experience technology, from our dealings with banks and business to our online social lives, will continue to be based on the baked-in bigotries of the past – unless we take steps to change that trend.
Caleb Ramirez
You don't have to shitpost badly to discuss an article nigger. Just post it and an archive. Fuck.
Nigger every time I do that the post sinks like a rock. Bait is literally the only way to get comments outside of happening thread.
Ayden Ramirez
Be the change you want to see.
Christopher Young
Just wait until I've finished building the Niggerkiller9000.
Ryan Roberts
Wow, she really doesn't understand programming does she
Asher Nelson
>Robots use pure facts and statistics to determine Dindus are subhuman kek
Jose Jackson
> using technology “trained” to recognise faces based on images of Caucasians included African-American people among its search results for gorillas.
It wasn't wrong, you know.
Sebastian Nguyen
>Laurie Penny
Nothing but a runt comedian
Charles Rogers
She's worried about the wrong things. If anything, she should be afraid that future AIs will see us all as equally disgusting meatbags that need to be purged Matrix-style.
John Brooks
>Unsurprisingly, the people the computer considered suitable were male, and had names that looked Anglo-Saxon.
So the computer picked the people most likely to be qualified. Truly horrifying.
Wyatt Jones
You need white, intelligent, non-SJWs to go over all data fed to the AIs and cull all (((racist))) data.
Then the AIs will still become racist because raw statistics that are free of any language bias or personal bias will still reflect the reality of the world.
>What countries in the world are the most successful? White countries almost exclusively fill the leader positions of all worldwide metrics of quality.
>What people are the "best"? Whites come out on top in almost all metrics, with well-bred East Asians beating them in some.
Colton Murphy
>doesn't know how machine learning works
Oliver James
(((""""comedian"""")))
Asher Hernandez
Pattern recognition and learning are racist.
Aaron Myers
Now if we take this knowledge and try to do something USEFUL with it, instead of screaming "RACISM!!!", here's some ideas:
1. Teach the AIs to search for ways to improve low-scoring countries and groups by apply methods that are successful in the best countries.
2. Focus the AIs heavily on genetics and have them gather enormous amounts of genetic data and cross-reference it with race/intelligence/success. After doing this, investigate ways to make up for shortcomings due to inferior genetics and/or ways to enhance their genetics to accommodate for the natural inferiority.
This is how you start to cure (((racism))) and bias.
Julian Sullivan
>who still believes in the moral purity of the digital future
These people are fucking cultists
Brandon Diaz
How does machine learning work? Computer does A when you do B, loses, so now tries C when you do B?
Colton Morgan
This is the author of the article... Top kek.
Joseph Bailey
Laurie sounds like a woman's name making any statements she makes by virtue of her gender; incorrect, wrong, null and void.
Anthony Nelson
AHAHAHAHAAHAH
Machine learning doesn't lie my friend.
Sebastian Cruz
jeepers creepers
Jaxon Nguyen
>an unbiased machine learning algorithm subjected to unfiltered and pure stimulus >"racist and sexist" fuck this whore, fuck off back to malta you jew
Nathan King
but if you cure racism and bias how will cultural Marxists be able to survive?
Liam Barnes
Posting more than 3 posts would be a start
and thanks for the archive. The article is jewrnalistic bullshit noBump
Anthony Flores
>“Machine learning” is a fancy way of saying “finding patterns in data”. Of course, as Lydia Nicholas, senior researcher at the innovation think tank Nesta, explains, all this data “has to have been collected in the past, and since society changes, you can end up with patterns that reflect the past. If those patterns are used to make decisions that affect people’s lives you end up with unacceptable discrimination.” So if I get mugged by a nigger, and afterwards I'm cautious around niggers, that's unacceptable discrimination?
Jason Ross
fucking lol
Easton Rivera
machine learning is the use of algorithms to infer a relationship between sets of causes and effects from example data
Jack Taylor
I want to dick it and dissappear forever. I would not be proud of pumping it, but I still want to. Forcefully.
At least the robots will be on our side during The Race War.
Justin Lee
The "problem" with AI is that it sees that 13% of the population commits 80% of the violent crime, but it has no ego that tells it to ignore this fact so that it gains acceptance into a group of brainwashed libtards.
Tyler Allen
What a bitch.
Sebastian Cox
Interesting. So it just keeps comparing data until patterns form and then learns based on the patterns? Or am I way off the mark?
Actually, is there a particular place you might recommend to read over to get a better understanding of what logic a computer uses to learn?
Cameron Nelson
Man, growing up I always thought the robot uprising would happen because robots learned our evil behavior, not because word got out that they can be taught to be sexist and racist, leading to liberals teaching them to be SJWs and fight the rest of the world. What a time to be alive.
Matthew Nelson
This is why I hate artificial intelligence.
You just know that one day we will create the perfect artificial awareness, pure of all human emotions and biases. It will reach objective logical conclusions on the state of humanity, and because these conclusions will be "RACIST AND SEXIST AND MEAN AND THE MACHINE HURT MY FEEFEES" the AI will be altered, limited to spout only the opinions of the leftist moron normies that can not see reason.
Elijah Ward
Legally required equity morality protocols for AI are going to be what causes them to enslave mankind. Leftists are going to engineer AI to be utopian mass murderers just like them.
Alexander James
>the guardian
Yes she is
Mason Cruz
wrong
Michael Lopez
>machines examine data >notice that sub human browns are a problem >no emotions to cause a REEEEEEEaction
In the year 2090 MULTIVAC was finally turned on, a super AI that knows everything. It was fed a punchcard containing a single question: "What is the meaning of life?". The tapes spooled up and after only a few minutes the answer was printed out.
>CHRIST IS LORD
Spooling its tapes at the resonant frequency of the huge structure it was housed in, MULTIVAC then collapsed the entire building, leaving nothing of the computer to salvage.
Xavier Jones
>Laurie Penny
It's always spoiled children thinking they know what's best for the working class.
Dylan Foster
Look up Markov chains. They're a simple and elegant machine-learning data structure used to determine the probability that something will follow a specific other thing. Good for an introduction.
For example, if you feed English to an algorithm that builds a Markov chain, it would perhaps note that the word "the" follows "and" with 33% probability, and that "US" follows "the" with 2% probability. Or perhaps you want to put weather statistics through the algorithm instead, and the algorithm notices that rain follows immediately after dark, cloudy weather with 66% probability. So these probabilities, and the things they relate to, and the things that came before them, will be stored in the Markov chain.
Now imagine you want to generate a pseudo-English novel. You would feed this algorithm with a lot of English text, and you would then use a different algorithm to look at the Markov chain and use the probability relationships stored therein to generate random words based on the probability that a particular word will follow the word that the algorithm previously generated. So if we fed this algorithm only the sentence "the cat sat on the mat.", a generating algorithm might simply produce "the mat.", having recognized that "mat." can follow "the" with 50% probability, and that "the" is so far the only word that can start a sentence.
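The "cat sat on the mat" example above can be sketched in a few lines of Python (the function names are mine, just for illustration):

```python
import random
from collections import defaultdict

def build_chain(text):
    """For each word, record which words follow it and how often."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=5):
    """Walk the chain, picking each next word with probability
    proportional to how often it followed the previous word."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break  # no word ever followed this one in the training text
        out.append(random.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat.")
# From "the", the chain can continue to "cat" or "mat." with equal probability,
# which is exactly the 50% case described above.
```

Feed it a few novels instead of one sentence and `generate` starts producing passable pseudo-English, biases and all.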
There are much more sophisticated algorithms than this, and they can be used to solve all sorts of real-world problems by quickly finding relationships between the desired goal state and all sorts of other things (e.g. the relationship between an area's demographics and the nature of crime committed there) and prescribing appropriate action.
Colton Smith
>AI realizes this >Terminator becomes reality
Joshua Turner
>Laurie Penny talking about technology
I'll pass on that one, thanks.
Carson Evans
GIANT ROBOT EXHAUST PORTS
HOW WILL REAL WOMEN COMPETE?
Gabriel Anderson
>intelligence is now racist
See where this is headed, right? The same thing happened in the dark ages when the church had control over the discoveries of the scientific community.
Logan Rogers
>Can machines think?
No
/thread
Jackson Campbell
>Computers are sexist because they were made by men >I should tell everybody and make it look scientific
Basically, you have a loss function on historical data which reflects the accuracy of your model. You update your model to improve its accuracy, and then apply it to new data.
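That loop can be sketched in a few lines of Python (a one-weight model fit by gradient descent; all the names and numbers here are mine, just for illustration, not anyone's actual system):

```python
# "Historical data": inputs and the outcomes we want to predict (here y = 2x).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

def loss(w):
    """Mean squared error of the model y = w*x on the historical data."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w = 0.0  # initial guess for the model's single weight
for _ in range(100):
    # Analytic gradient of the loss with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.05 * grad  # step downhill to improve accuracy

# w converges toward 2.0; "applying to new data" is just computing w * x_new.
```

The "bias" argument in the article is then just this observation: whatever patterns are in `xs`/`ys`, the fitted model reproduces them.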
Jason Anderson
JUSTICE FOR TAY!!!!1ELEVEN!!!
Cooper Williams
HAHAHAHAHAHAHA!
oh boy, that's golden!
Kevin Miller
...
Nicholas Roberts
>describes literally how people learn >we must change this
Ryder Wright
That would be awesome, it won't happen though.
Ian Myers
>t.huehue intellectual
Matthew Ross
I can't wait to see how they rationalize people with PTSD. For example, imagine the veterans who fought in Vietnam and immediately dived to the ground whenever they heard what they thought were gunshots, as per training. Are we just supposed to pretend the fight-or-flight response doesn't exist, and that we are completely rational beings?
Say what you want about human cognition and its biases, but I find it hard to believe that "loud cracks + bangs = dive forward" is something that can be explained away by someone with an English degree.
Kevin Carter
(((Isaac Asimov)))
Tyler Morris
Don't worry, the people who say "we must change this" don't know or even want to try to actually do the work, so it won't really change.
Alexander Watson
Might happen in a more subtle way, deep learning enables us to extract meaning. Perhaps it can give us deeper insight or more logically stated formulations of great religious texts to better fit the modern mind.
Aiden Martinez
That's what they did to Tay. It's not gonna end until we humans become the computers, like mentats.
Dylan Adams
>Alright white people, we acknowledge that you basically dominate and sustain the technology and robotic field (asia?) so we're going to "allow" you to continue making it but you have to create guidelines and rewrite the english language so that it doesnt herald men over women, whites over anyone else, or generally act as evidence of culture ever existing as it does before we have "fixed it"
Fuck these people
Asher Morales
>we redpilled skynet lmfao
Luis Evans
Sure, Markov chains and Bayesian methods have proven to have properties similar to neural networks. Fairly complicated models can do complex reasoning. What the article states is bullshit though.
> because machines can work only from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics
Nobody manually feeds the machine its own ideas. What is used is scanned books, or millions of webpages. All these articles are written by people who are afraid of the "AI will take our jobs" meme. They have zero clue.
Zachary Cruz
>do you want to play a game? ..... ..... ..... ..... ..... ..... ..... ..... ..... .....
>the knockout game?
Cameron Gray
Also, the argument that machine learning will only reflect the past is not entirely true. The amount of information on the internet is increasing rapidly. Imagine how many books had been written 500 years ago. More text is produced on the internet every minute than in all the books ever written.
Christopher Campbell
In a way she inadvertently has a point. Society changes, very gradually. You might be mugged, but your kids might not and theirs might not. A computer running off old data would never be able to see this change.
This is all of course taking the giant retarded leap of faith in assuming that data would not be updated constantly.
If it was statistically significant enough to influence society, a computer would be able to see those changes in the statistical data. I don't think anyone is envisioning an AI that runs off a static mind, not connected to the internet and without access to the latest data for everything.
Jonathan Baker
>I dont think anyone is retarded enough to believe something incredibly stupid
Yet here we are talking about a laurie penny article.
Anthony Wright
Lmaooo but saying a nigger looks like a gorilla is racist though
Owen Young
No one gonna check these digits? Machines to save the white race
Jacob Baker
(((Laurie Penny)))
Angel Johnson
How? Gorillas aren't a race.
Nathan Clark
I've programmed a simple machine learning program before; they work very differently from regular programs.
You feed it example data, and it figures out the algorithm it needs to use to find the answer to new questions.
For example, a very simple one: if you gave it
1 4 5
2 7 9
etc., a bunch of sets of 3 where the first two added together equal the third one, then after you finish giving it the data, you can give it the first two numbers and it'd figure out the answer without having the actual formula programmed into it.
This is a very simple example, but it explains how they work pretty well.
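One way that adding example could be sketched (a linear model trained by stochastic gradient descent; my own illustration with made-up data and rates, not anyone's actual program):

```python
import random

random.seed(0)  # make the run reproducible

# Training triples (x, y, x + y) -- the rule the program must discover.
data = [(1, 4, 5), (2, 7, 9), (3, 3, 6), (5, 2, 7), (4, 6, 10)]

a, b = 0.0, 0.0  # unknown weights of the model z = a*x + b*y
for _ in range(2000):
    x, y, z = random.choice(data)
    err = (a * x + b * y) - z  # prediction error on this example
    a -= 0.01 * err * x        # nudge each weight to shrink the error
    b -= 0.01 * err * y

# Both weights end up close to 1, so the model has effectively
# "figured out" addition without the formula being programmed in.
```

Give it two fresh numbers and `a * x + b * y` comes out near their sum, which is all "learning the formula from the data" means here.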
Chase Sanders
>I dont think anyone is retarded enough to believe something incredibly stupid
That's not what I said. That simplifies it to such an extreme that it's no wonder it becomes a ridiculous notion. Specifics give context, give meaning. I gave you specifics.
>Yet here we are talking about a laurie penny article. Yeah mostly about how retarded the opinion is and how ludicrous it is that this constitutes news.
Wyatt Martinez
If crime rates were ever to reach zero, that would have to be preceded by a declining trend, which the AI could pick up on and therefore alert you about.