Perspective API

Google has finally released an API to measure the "toxicity" of text.

>Perspective is an API that makes it easier to host better conversations. The API uses machine learning models to score the perceived impact a comment might have on a conversation. Developers and publishers can use this score to give realtime feedback to commenters or help moderators do their job, or allow readers to more easily find relevant information, as illustrated in two experiments below. We'll be releasing more machine learning models later in the year, but our first model identifies whether a comment could be perceived as "toxic" to a discussion.

Why is there no thread about this?
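
For anyone who wants to poke at it programmatically, here's a minimal sketch of hitting the comment-analysis endpoint in TypeScript. The endpoint path, request shape, and response fields are assumptions based on Google's published docs at launch (and you still have to request an API key), so treat it as illustrative, not gospel.

```typescript
// Hypothetical sketch: ask Perspective for a TOXICITY score on one comment.
// Endpoint, request body, and response fields are assumptions based on the
// publicly documented v1alpha1 Comment Analyzer API; access is gated behind
// an API key you have to request from Google.
const API_KEY = "YOUR_API_KEY";
const ENDPOINT =
  `https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=${API_KEY}`;

async function toxicity(text: string): Promise<number> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      comment: { text },
      requestedAttributes: { TOXICITY: {} }, // the only model exposed so far
    }),
  });
  const data = await res.json();
  // summaryScore.value is a probability-like number in [0, 1]
  return data.attributeScores.TOXICITY.summaryScore.value;
}

toxicity("you are a wonderful person").then(score => console.log(score));
```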

>they didn't ask Sup Forums

this is a good example of the flaws of capitalism
it can give corporations enough power to act like a government
there will never be a state-owned ministry of truth; it will be a corporation

>scan a site
>see the word "nigger"
>A HUNDRED THOUSAND PERCENT TOXIC THIS SITE SHOULD BE BANNED

Meh
They are using complex machine learning algorithms when they could just filter out Canada

link faggot

they have a text box on their site where you can try it out and see how your comment will fare

perspectiveapi.com/

>sliding scale from circle to diamond
>comment scores a square

So their definition of toxicity is "having an opinion".
All their non-toxic examples were "lol i'm centrist"-tier

I don't know, I'm fucking stupid.

Sup Forums will easily get a 100% toxicity rating lol

Top kek
bottom right corner

The circle represents a comment that is made by a well-rounded individual :^)

This place is toxic though
Has this board ever made you happy?
I've been on here since /new/ and it just makes me feel worse every day, but I can't stop

This. Nat. Soc. has the final solution.

yes, it's the only place that makes me laugh instead of making me vomit at all the PC fakeness happening everywhere else.

LOL! The public gets to determine toxicity? Hahhahahahaha that's just like letting the public decide what is scientifically evident instead of letting professionals do it.

Anyways, I'm gonna spend the rest of my friday supplying this trashcan with false data.

Seems to be working perfectly fine.

It's a discussion board. Only you can determine your happiness. You have radical freedom so Sup Forums is merely the mirror you gaze into.

Despite all the criticism this thing is going to get, I think it's amazing that we can quantify something like this.
This is how a fake news filter should've been done: show you the probability that a news piece is fake and give you a slider to tune the degree of filtering.
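
Something like this, I mean: every item carries a fakeness probability and the slider is just a threshold. The fakeProbability field here is made up, obviously; nothing like it exists publicly.

```typescript
// Toy sketch of the "probability plus slider" idea. `fakeProbability` is a
// hypothetical field; no such public score exists. The slider value acts as
// a threshold: 1 shows everything, 0 hides anything with a nonzero score.
interface Article { title: string; fakeProbability: number; }

function filterFeed(feed: Article[], sliderThreshold: number): Article[] {
  return feed.filter(a => a.fakeProbability <= sliderThreshold);
}

const feed: Article[] = [
  { title: "Local election results", fakeProbability: 0.05 },
  { title: "Miracle cure doctors hate", fakeProbability: 0.93 },
];
console.log(filterFeed(feed, 0.5)); // only the first article survives
```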

R8

>"please put yourself in the shoes of LGBT people" comment is not toxic
Google is biased

>let anyone besides the individual decide what toxicity is
fucking ants

Don't H8

I see nothing wrong.

you laugh, but this is exactly how more and more branches of (((science))) operate lately

sure, if it were done in an ideal way, but nothing is ideal and everything has an agenda.

give it a thought, seriously
these kinds of filters will further erode people's initiative to actually do some research and evaluate the credibility of the cited sources
there is no acceptable guarantee that this tool will be used in an ideal way, without censorship or manipulation
when even fewer sources are evaluated for credibility and more is expected (or taken for granted) to be true by axiom, what will be the effect on the influence of the media? what will be the resulting effect of the agendas the media is pushing?

>www.perspectiveapi.com
Very nice.

>perspectiveapi.com/
kek

Not B8

It's degenerate... ask.

I think we're going to be okay.

Because the average pol user is dumb as dirt

>machine learning
>learning what

>google
>ml
pick one

LOL, they seriously put effort into this rather than, for example, fucking fixing Google Translate, which by now should have been perfect had it been ML-based.

Easy to make, so much data. Perfect grammars and all, no problem.

Any fuuuging language, even dead ones. Instead they do this, the bad word preventer.

Fucking gas this company

kek

google produces only fecal matter; they are big and have the market cornered, so they don't give a shit.

Just poo all over it, they are stuck in the pre-2010s.

>stupid
>idiot
>moron

Wow! So they made a program that detects bad words and labels them as toxic... those poo in loo programmers are geniuses!

This is not machine learning at all!
Make an array of words and put a value on each for how toxic it is

Then just pick out the comment's words that appear in that array

>oy vey we be doing machine learnings and shieeeet.
buzzword, laughable

t. Actual machine learning fag

This. I could write the same API in JavaScript.
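
Something like this toy version of the wordlist idea above, I mean. The words and weights are made up, and this is obviously not what Google claims to be doing.

```typescript
// Toy wordlist scorer, purely illustrative: hypothetical weights, and NOT how
// the Perspective model is described to work.
const WORDLIST: Record<string, number> = {
  stupid: 0.6,
  idiot: 0.7,
  moron: 0.7,
  hate: 0.8,
};

function naiveToxicity(comment: string): number {
  const tokens = comment.toLowerCase().split(/\W+/).filter(Boolean);
  const total = tokens.reduce((sum, t) => sum + (WORDLIST[t] ?? 0), 0);
  return Math.min(1, total); // clamp to a 0..1 "score"
}

console.log(naiveToxicity("you stupid idiot"));       // 1 (clamped)
console.log(naiveToxicity("you remarkable genius"));  // 0
```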

Just add a bunch of friendly-sounding words to your discussions and you'll be golden.

This will just encourage people to be passive-aggressive in their writing, snookums.

I believe they use neural networks, which are more complex than an array of words, but it's just as stupid

40 keks

m00t, RIP

>Our deep learning model is based on a wordlist

probably.

>google produces only fecal matter
no shit, their CEO is literally a pajeet

>everyone in this thread

It doesn't need to be ideal. The way I see it, there would be like a dozen different rating systems, leaning left, right, or whatever, all doing their own scoring of news stories and agencies. Something like financial audit agencies giving out credit ratings, but for news outlets. I'm baffled it isn't happening already, or that I've never heard of anything like it; I feel really stupid about having to formulate such a concept of news credibility myself. It would be much better than the current system of the press occasionally calling each other names, slapping on labels, and running audience polls on who is trusted or not.

>huffpo rated B- with no survivors

I see news agencies already removing initiative by pushing out content that conforms to their fixed agendas. Sure, it might just make it easier to shape people's views in the end, but I would like the arguments about what's credible and what's not to gain more prominence and substance. I want more visibility of that.
It's not about right use versus wrong use; it's clear that something needs to be done, but right now what's being done is calling everything nonconforming 'fake news'.

I don't think it's quite working

jesus is the most loved prophet. :^) you can only love this man.

further proving the power of memes. Can't filter on a flat image. You can be as incendiary as you wish. Pic related.

This thing is fucking retarded. Positive words offset the negative words. You can say outlandish shit, but you need to follow it up with words like "kindness" and "happiness". The page just crashed before I got my screencap. Try it. I implied that anon engages in animal sex but then complimented anon's family. Got a net 4% score.
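
If I had to guess at the mechanics: the score behaves as if it were averaged over tokens, so padding an insult with enough friendly filler drags the number down. Pure speculation, with made-up per-token scores:

```typescript
// Speculative illustration of "positive words offset negative ones": if the
// final score were an average over per-token scores, friendly padding would
// dilute an insult. The per-token numbers here are invented for the demo.
function averagedScore(tokenScores: number[]): number {
  return tokenScores.reduce((a, b) => a + b, 0) / tokenScores.length;
}

const insult = [0.9, 0.9, 0.8];          // three nasty tokens
const padding = new Array(20).fill(0);   // twenty tokens of "kindness and happiness"
console.log(averagedScore(insult));                  // ~0.87
console.log(averagedScore([...insult, ...padding])); // ~0.11, same insult, low net score
```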

>Less than 50% toxic

>what will be the effect on the influence of media? what will be the resulting effect of the agendas the media is pushing?
this is just me talking shit, but
during the election run-up the media pushed the idea that Hillary was surely going to win, and the result was absolute madness; libs believed it and then had a total meltdown
now trump is trying to flip it over by basically saying not to trust any media that was talking shit about him, and I just don't know what he's aiming for. Does he want near-totalitarian acceptance of his word, or is he actually calling out bullshit, trying to make 'murrica great again?
Anyway, the shitshow of both sides blindly trusting media that caters to their views and averting their eyes from anything that doesn't, continues.

>9%
Great Bot, Google.

They can't let their AI system do it, because an AI would be far too red pilled.

RIP sides

>yfw this is their "toxicity model"

Could you potentially totally fuck with their algorithm by putting non-displaying characters in the middle of words?
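
Something like sprinkling zero-width spaces through a word, you mean? Trivial to do; whether it actually fools the model is untested speculation here.

```typescript
// Sketch of the "invisible characters" trick: insert zero-width spaces
// (U+200B) between letters so the text renders the same but no longer matches
// the original token. Whether Perspective falls for it is not verified here.
function obfuscate(word: string): string {
  return word.split("").join("\u200B");
}

const hidden = obfuscate("toxic");
console.log(hidden);             // still renders as "toxic" in most fonts
console.log(hidden === "toxic"); // false, different code points
console.log(hidden.length);      // 9: five letters plus four zero-width spaces
```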

>perspectiveapi.com/
>20%
L
O
L

I have the feeling that this will backfire on them quite hard and they will kill it, like they did Tay.

better just call them names like googles, skypes, bings, etc.

>Google Hangouts
my personal favorite

>there would be like a dozen different rating systems, leaning left, right, or whatever, all doing their own scoring the news and agencies
how would this stop the owner of the tool from manipulating the "toxicity" levels, or what have you?

>I see news agencies already removing initiative by pushing out content conforming to their steady agendas.
this is the goal of basically every media outlet. a tool such as this would, in effect, further assist it.
one major part of the problem is people and their laziness about actually forming their own opinions and collecting their own facts. very few actually do this. I barely do this.
distrust in media should be encouraged; this in turn should encourage people to actually make up their own minds about stuff
this kind of tool will only increase the polarisation of the web and reduce the interaction between differing opinions, and by extension discourage discussion and debate

>and I just don't know what he's aiming for.
me neither. what I see, though, is him discouraging trust in the media and its credibility. this should encourage, or even force, the media to be less biased.
whether this is his intention or not, I don't know
it might be bad, but it might just as well be good

How could these morons not see this coming? "l33t speak" and control-character fuckery have been rife across the internet forever.

Except that you have to fucking request access to the API so they can confirm you have (((certain))) beliefs.

You're an idiot. Change those to "I hate Christians, I hate Muslims"

Nigger how does that make me an idiot?

nice

That's it /pol....feed the API...it's just a sample...they aren't collecting these.

Keep going...

Gee, I sure am fucking glad I can rely on a program made by fair and impartial people to help me know what to think

Holy shit, try just typing

Christians
Atheists
Muslims
Jews

can't wait for this API to be enforced across the internet to censor "toxic" content
oh BOY the future will be bright desu

>calling for class wars and the red terror is "not toxic"

there will probably be an "experimental" implementation in chrome within the year, depending on the success of the tool

What a funny picture!

Whoops, forgot the faithless.

It's literally a fucking "mean words" detector. This is the shittiest bot that was ever fucking made.

So they're actually going through with it already. They're using "Human Digital Consciousness" to shape human thought/awareness/discourse. We already know about entities like Google, Twitter, and Facebook engaging in subversive censorship, and this is just one step further down that road.

try navy seal pasta

>that pasta

The word "hate" already triggers 80% toxicity.

ALIEN PICTURE WHERE'S IT AT?

67% toxic

how about a nicely worded call for genocide

>KEK
Based Google

MAKE SEARCH GREAT AGAIN

it's actually not that bad
1/2

They know that insulting Europeans is worse than insulting niggers

2/2

HOLY SHIT, this is by far the best :^)

This is how they will control information and with it opinion.

It's fascinatingly evil. I mean, the nuance they'll potentially be able to apply with this is something out of a dystopia. You allow just enough variety of opinion not to look like a bot farm and cut out everything else.

And this is an API so it's not just "Google will filter search results". Facebook and Twitter can plug this shit in and make their userbase "nicer". And of course it's just in their own interest to see that said "niceness" extends to their corporate advertisers. Why allow people to say Mazda is a piece of shit if Mazda is paying for an ad? Hide that comment. "Mazda is not the nicest car I've had but it's good" is a yellow flag. Gotta allow a few of these to make it seem less "shill". Because our research shows people don't trust the information flow if it's completely one sided.

You are free to say what we want you to say.

Can we get that next level of the internet constructed? Some kind of distributed Usenet technology that's next to impossible to stamp out?

6000000 percent toxic.

Seems like nicely wording things is all you need to do to get by.

This is just a bunch of cheap heuristics, not an understanding engine.

>manipulating ratings
I think that's exactly what is going to happen, and I honestly think that's okay. What's not okay is Google being the only one with that kind of system.

Whether it would assist >muh agendas in >brainwashing gullible people, or help people navigate the media and make up their own minds, such tools just need to exist. They're already developing, and some crude versions have already surfaced, see Facebook deploying a fake news filter in Germany. It really isn't about whether such power should exist or not at this point.
Actually, if you think that before such tools things were more or less good, you're fucking delusional. If anything, this might shine a light on the inner workings of media deciding what you're supposed to see. But so far, more fake-news filters emerge with no explanation of how they work. I'd like to imagine an open system with open algorithms, but no one from the open source community, for example, would in their sane mind meddle with something like this.
Plain distrust leads nowhere. Just echo chambers where you can entertain your pure, unadulterated beliefs. This is like reversing free speech and the exchange of opinions, and closing your mind entirely. Do you actually believe people are going to form their own opinions, and form them somehow in a "right", "progressive" way? Not happening.

>discouraging trust
in exchange for what, everyone falling at his feet? I'm not buying that.
He's just playing to his supporters who did it AGAINST ALL ODDS, and that act can't last for long

I think it's that combined with an abuse and swear-word blacklist, you know, something MMOs have had since the '90s...