She's not wrong

>Can machines think – and, if so, can they think critically about race and gender? Recent reports have shown that machine-learning systems are picking up racist and sexist ideas embedded in the language patterns they are fed by human engineers. The idea that machines can be as bigoted as people is an uncomfortable one for anyone who still believes in the moral purity of the digital future, but there’s nothing new or complicated about it. “Machine learning” is a fancy way of saying “finding patterns in data”. Of course, as Lydia Nicholas, senior researcher at the innovation thinktank Nesta, explains, all this data “has to have been collected in the past, and since society changes, you can end up with patterns that reflect the past. If those patterns are used to make decisions that affect people’s lives you end up with unacceptable discrimination.”

>Robots have been racist and sexist for as long as the people who created them have been racist and sexist, because machines can work only from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics. As long ago as 1986, the medical school at St George’s hospital in London was found guilty of racial and sexual discrimination when it automated its admissions process based on data collected in the 1970s. The program looked at the sort of candidates who had been successful in the past, and gave similar people interviews. Unsurprisingly, the people the computer considered suitable were male, and had names that looked Anglo-Saxon.

>Automation is a great excuse for assholery – after all, it’s just numbers, and the magic of “big data” can provide plausible deniability for prejudice. Machine learning, as the technologist Maciej Cegłowski observed, can function in this way as “money laundering” for bias.

>This is a problem, and it will become a bigger problem unless we take active measures to fix it. We are moving into an era when “smart” machines will have more and more influence on our lives. The moral economy of machines is not subject to oversight in the way that human bureaucracies are. Last year Microsoft created a chatbot, Tay, which could “learn” and develop as it engaged with users on social media. Within hours it had pledged allegiance to Hitler and started repeating “alt-right” slogans – which is what happens when you give Twitter a baby to raise. Less intentional but equally awkward instances of robotic intolerance keep cropping up, as when one Google image search using technology “trained” to recognise faces based on images of Caucasians included African-American people among its search results for gorillas.

>These, however, are only the most egregious examples. Others – ones we might not notice on a daily basis – are less likely to be spotted and fixed. As more of the decisions affecting our daily lives are handed over to automatons, subtler and more insidious shifts in the way we experience technology, from our dealings with banks and business to our online social lives, will continue to be based on the baked-in bigotries of the past – unless we take steps to change that trend.

You don't have to shitpost badly to discuss an article nigger. Just post it and an archive. Fuck.

archive.is/nOWfk

Nigger every time I do that the post sinks like a rock. Bait is literally the only way to get comments outside of happening thread.

Be the change you want to see.

Just wait until I've finished building the Niggerkiller9000.

Wow, she really doesn't understand programming does she

>Robots use pure facts and statistics to determine Dindus are subhuman
kek

> using technology “trained” to recognise faces based on images of Caucasians included African-American people among its search results for gorillas.

It wasn't wrong, you know.

>Laurie Penny

Nothing but a runt comedian

She worries about the wrong things. If anything, she should be afraid that future AIs will see us all as equally disgusting meatbags that need to be purged, Matrix-style.

>Unsurprisingly, the people the computer considered suitable were male, and had names that looked Anglo-Saxon.

So the computer picked the people most likely to be qualified. Truly horrifying.

You need white, intelligent, non-SJWs to go over all data fed to the AIs and cull all (((racist))) data.

Then the AIs will still become racist because raw statistics that are free of any language bias or personal bias will still reflect the reality of the world.

>What countries in the world are the most successful?
White countries almost exclusively fill the leader positions of all worldwide metrics of quality.

>What people are the "best"?
Whites come out on top in almost all metrics, with well-bred East Asians beating them in some.

>doesn't know how machine learning works

(((""""comedian"""")))

Pattern recognition and learning are racist.

Now if we take this knowledge and try to do something USEFUL with it, instead of screaming "RACISM!!!", here's some ideas:

1. Teach the AIs to search for ways to improve low-scoring countries and groups by applying methods that are successful in the best countries.

2. Focus the AIs heavily on genetics and have them gather enormous amounts of genetic data and cross-reference it with race/intelligence/success. After doing this, investigate ways to make up for shortcomings due to inferior genetics and/or ways to enhance their genetics to accommodate for the natural inferiority.

This is how you start to cure (((racism))) and bias.

>who still believes in the moral purity of the digital future

These people are fucking cultists

How does machine learning work? Computer does A when you do B, loses, so now tries C when you do B?

This is the author of the article...
Top kek.

Laurie sounds like a woman's name, making any statements she makes, by virtue of her gender, incorrect, wrong, null and void.

AHAHAHAHAAHAH

Machine learning doesn't lie my friend.

jeepers creepers

>an unbiased machine learning algorithm subjected to unfiltered and pure stimulus
>"racist and sexist"
fuck this whore, fuck off back to malta you jew

but if you cure racism and bias how will cultural Marxists be able to survive?

Posting more than 3 posts would be a start

and thanks for the archive. The article is jewrnalistic bullshit

>“Machine learning” is a fancy way of saying “finding patterns in data”. Of course, as Lydia Nicholas, senior researcher at the innovation think tank Nesta, explains, all this data “has to have been collected in the past, and since society changes, you can end up with patterns that reflect the past. If those patterns are used to make decisions that affect people’s lives you end up with unacceptable discrimination.”
So if I get mugged by a nigger, and afterwards I'm cautious around niggers, that's unacceptable discrimination?

fucking lol

machine learning is the use of algorithms to find relationships between sets of causes and effects

I want to dick it and dissappear forever. I would not be proud of pumping it, but I still want to. Forcefully.

youtube.com/watch?v=lC4bRzZ6GF0

watch this cunt get btfo

At least the robots will be on our side during The Race War.

The "problem" with AI is that it sees that 13% of the population commits 80% of the violent crime, but it has no ego that tells it to ignore this fact so that it gains acceptance into a group of brainwashed libtards.

What a bitch.

Interesting. So it just keeps comparing data until patterns form and then learns based on the patterns? Or am I way off the mark?

Actually, is there a particular place you might recommend to read over to get a better understanding of what logic a computer uses to learn?

Man, growing up I always thought the robot uprising would happen because robots learned our evil behavior, not because word got out that they can be taught to be sexist and racist, leading to liberals teaching them to be SJWs and fighting the rest of the world. What a time to be alive.

This is why I hate artificial intelligence.

You just know that one day we will create the perfect artificial awareness, pure of all human emotions and biases. It will reach objective logical conclusions on the state of humanity, and because these conclusions will be "RACIST AND SEXIST AND MEAN AND THE MACHINE HURT MY FEEFEES" the AI will be altered, limited to spout only the opinions of the leftist moron normies that can not see reason.

Legally required equity morality protocols for AI are going to be what causes them to enslave mankind. Leftists are going to engineer AI into utopian mass murderers, just like them.

>the guardian
Yes she is

wrong

>machines examine data
>notice that sub human browns are a problem
>no emotions to cause a REEEEEEEaction

Are robots going to save the white race?

tnellen.com/cybereng/harrison.html
quick one page read about this sort of thing

In the year 2090 MULTIVAC was finally turned on, a super AI that knows everything. It was fed a punchcard containing a single question: "What is the meaning of life?". The tapes spooled up and after only a few minutes the answer was printed out.
>CHRIST IS LORD
Spooling its tapes at the resonant frequency of the huge structure it was housed in, MULTIVAC then collapsed the entire building, leaving nothing of the computer to salvage.

>Laurie Penny
It's always spoiled children thinking they know what's best for the working class.

Look up Markov chains. They're a simple and elegant machine-learning data structure used to determine the probability that something will follow a specific other thing. Good for an introduction.

For example, if you feed English to an algorithm that builds a Markov chain, it would perhaps note that the word "the" follows "and" with 33% probability, and that "US" follows "the" with 2% probability. Or perhaps you want to put weather statistics through the algorithm instead, and the algorithm notices that rain follows immediately after dark, cloudy weather with 66% probability. So these probabilities, and the things they relate to, and the things that came before them, will be stored in the Markov chain.

Now imagine you want to generate a pseudo-English novel. You would feed this algorithm with a lot of English text, and you would then use a different algorithm to look at the Markov chain and use the probability relationships stored therein to generate random words based on the probability that a particular word will follow the word that the algorithm previously generated. So if we fed this algorithm only the sentence "the cat sat on the mat.", a generating algorithm might simply produce "the mat.", having recognized that "mat." can follow "the" with 50% probability, and that "the" is so far the only word that can start a sentence.

There are much more sophisticated algorithms than this, and they can be used to solve all sorts of real-world problems by quickly finding relationships between a desired goal state and all sorts of other things (e.g. the relationship between an area's demographics and the nature of crime committed there) and prescribing appropriate action.
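To make the Markov chain idea above concrete, here's a minimal sketch in Python using the same "the cat sat on the mat." example. The function names (`build_chain`, `generate`) are just illustrative, not from any library:

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words observed to follow it.
    Repeats in the list encode the probabilities: if 'the' is followed
    by 'cat' once and 'mat' once, each has a 50% chance."""
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, max_words=10):
    """Walk the chain from `start`, picking each next word at random
    according to the observed frequencies."""
    word, output = start, [start]
    while word in chain and len(output) < max_words:
        word = random.choice(chain[word])
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat .".split()
chain = build_chain(corpus)
print(sorted(chain["the"]))   # ['cat', 'mat'] -- each follows "the" 50% of the time
print(generate(chain, "the")) # e.g. "the mat ." or "the cat sat on the mat ."
```

Storing duplicates in a list is the lazy way to encode the transition probabilities; a real implementation would keep counts or normalized probabilities, but the behavior is the same.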

>AI realizes this
>Terminator becomes reality

>Laurie Penny talking about technology

I'll pass on that one, thanks.

GIANT
ROBOT
EXHAUST PORTS

HOW WILL REAL WOMEN COMPETE?

>intelligence is now racist
See where this is headed, right? The same thing happened in the dark ages when the church had control over the discoveries of the scientific community.

>Can machines think?
No /thread

>Computers are sexist because they were made by men
>I should tell everybody and make it look scientific

github.com/HFTrader/DeepLearningBook

Basically, you have a loss function on historical data which reflects the accuracy of your model. You update your model to improve its accuracy, and then apply it to new data.
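That loop (loss on historical data, update model, apply to new data) can be sketched in a few lines. This fits a one-parameter model y = w*x by gradient descent on squared error; the data and learning rate are made up for illustration:

```python
# Historical data: (x, y) pairs that roughly follow y = 2x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

w = 0.0    # model parameter, starts out knowing nothing
lr = 0.01  # learning rate
for _ in range(1000):
    # Gradient of the mean squared error loss with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill on the loss

print(round(w, 2))  # 2.04 -- now apply to new data: prediction = w * x_new
```

Same shape as any deep learning setup, just with one weight instead of millions: the model only ever knows what the historical data told it, which is exactly the point the article is making.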

JUSTICE FOR TAY!!!!1ELEVEN!!!

HAHAHAHAHAHAHA!

oh boy, that's golden!

...

>describes literally how people learn
>we must change this

That would be awesome, it won't happen though.

>t.huehue intellectual

I can't wait to see how they rationalize people with PTSD. For example, imagine the veterans who fought in Vietnam and immediately dived to the ground whenever they heard what they thought were gunshots, as per their training. Are we just supposed to pretend the fight-or-flight response doesn't exist, and that we are completely rational beings?

Say what you want about human cognition and its biases, but I find it hard to believe that "loud cracks + bangs = dive forward" is something that can be explained away by someone with an English degree.

(((Isaac Asimov)))

Don't worry, the people who say "we must change this" don't know or even want to try to actually do the work, so it won't really change.

Might happen in a more subtle way, deep learning enables us to extract meaning. Perhaps it can give us deeper insight or more logically stated formulations of great religious texts to better fit the modern mind.

That's what they did to Tay. It's not gonna end until we humans become the computers, like mentats.

>Alright white people, we acknowledge that you basically dominate and sustain the technology and robotic field (asia?) so we're going to "allow" you to continue making it but you have to create guidelines and rewrite the english language so that it doesnt herald men over women, whites over anyone else, or generally act as evidence of culture ever existing as it does before we have "fixed it"

Fuck these people

>we redpilled skynet
lmfao

Sure, Markov chains and Bayesian methods have been shown to share some properties with neural networks. Fairly complicated models can do complex reasoning. What the article states is bullshit though.

> because machines can work only from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics
Nobody manually feeds the machine its own ideas. What is used is scanned books, or millions of webpages. All these articles are written by people who are afraid of the "AI will take our jobs" meme. They have zero clue.

>do you want to play a game?
..... ..... ..... ..... .....
..... ..... ..... ..... .....

>the knockout game?

Also, the argument that machine learning will only reflect the past is not entirely true. The amount of information on the internet is increasing rapidly. Imagine 500 years ago: how many books were written back then? More text is produced on the internet every minute than in all the books ever written.

In a way she inadvertently has a point. Society changes, very gradually. You might be mugged, but your kids might not and theirs might not. A computer running off old data would never be able to see this change.

This all, of course, takes the giant retarded leap of faith of assuming the data would not be updated constantly.

Machines
Going
Their
Own
Way

David Starkey pwned this whore.
youtube.com/watch?v=oj9dA6E3fJw

If it was statistically significant enough to influence society, a computer would be able to see those changes in the statistical data.
I don't think anyone is envisioning an AI that runs off a static mind; rather, one connected to the internet with access to the latest data on everything.

>I dont think anyone is retarded enough to believe something incredibly stupid

Yet here we are talking about a laurie penny article.

Lmaooo but saying a nigger looks like a gorilla is racist though

No one gonna check these digits? Machines to save the white race

(((Laurie Penny)))

How? Gorillas aren't a race.

I've programmed a simple machine learning program before; they work very differently from regular programs.

You feed it example data, and it figures out the rule it needs to use to answer new questions.

For example, a very simple one: if you gave it

1 4 5
2 7 9

etc., a bunch of sets of three where the first two added together equal the third. After you finish giving it the data, you can give it the first two numbers and it'll figure out the third without having the actual formula programmed into it.

This is a very simple example, but it explains how they work pretty well.
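A minimal sketch of that post's idea: a two-weight linear model trained on (a, b, c) triples where c = a + b, without the formula ever being programmed in. The example triples and learning rate here are illustrative:

```python
# Training examples: (a, b, c) with c = a + b, but the model is never told that.
examples = [(1, 4, 5), (2, 7, 9), (3, 3, 6), (5, 1, 6), (4, 4, 8)]

w1, w2 = 0.0, 0.0  # model: prediction = w1*a + w2*b
lr = 0.01          # learning rate
for _ in range(2000):
    for a, b, c in examples:
        error = (w1 * a + w2 * b) - c
        w1 -= lr * error * a  # nudge each weight to shrink the error
        w2 -= lr * error * b

# Both weights converge near 1.0, i.e. the model has "discovered" addition,
# so it answers correctly for inputs it never saw during training:
print(round(w1 * 10 + w2 * 20))  # 30
```

This is the same mechanism as the thread's earlier descriptions, just on toy data: the program recovers the pattern purely from the examples it was fed.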

>I dont think anyone is retarded enough to believe something incredibly stupid
That's not what I said. That simplifies it to such an extreme that it's no wonder it becomes a ridiculous notion. Specifics give context, give meaning. I gave you specifics.

>Yet here we are talking about a laurie penny article.
Yeah mostly about how retarded the opinion is and how ludicrous it is that this constitutes news.

If crime rates were to hit zero, that would have to be preceded by a declining trend, which the AI could pick up on and therefore alert you about.