Anyone know any actually reputable and truthful fucking news sources

The Washington Post

>reputable
>truthful
Pick one.

fox news

Alex Jones.

None of them are really that bad assuming you personally are capable of telling the difference between news and opinion pieces.

Facebook

This

Pew News

Your drunk uncle with the sick dog and too many gardening tools, yet an overgrown lawn.

Same

Can't work on the lawn when you're busy keeping up with the world

That's not my uncle m8, that's my dad.

pol

if a human wrote the article, it's biased; that's what i've learned. best you can do is check multiple sources

This is terrible advice.

oh fuck its a useful reply thank you

Who the fuck else is going to write the article other than a human ffs?

The guardian

Deep down you know you don’t believe that

Pornhub.com

why is that?

The Economist

en.wikipedia.org/wiki/Automated_journalism

It's called Sup Forums, that person probably thought you were talking about Pol Pot's newspaper, which has a clear furry/MLP bias.

Possibly because that entire board is populated by edgelords that think saying “nigger” and pretending to be jew hating incel white supremacists is intellectually stimulating.

Is also a good one.

That algorithm needs a primary source of data, which must be written by a human being. Otherwise the bot is just making shit up, which makes it even less reliable.

here
Also keep in mind that the algorithm is designed to use language in a specific way, as written by the programmer. So even if you theoretically had a bot journalist, its output would still be written by a human being.

no offense but you have no idea how far bots have come

Here's all the news you need to know
America controls by force and conquering
China controls by coming in and cleaning up the mess, thereby creating debts for other nations that are repaid by exploiting the people and lands of those countries
The fiat money system is rigged and is the primary indicator of how the world really works
Your government is spying and lying to you
You don't matter
Try to enjoy life and see all that you can and help all those that you can
Fuck the news

Friendly Jordies on youtube if you're into Australia's batshit insane politics.

Are you saying that bots have become sentient? Can a bot witness a real life event and come to any sort of conclusion about the circumstances? Can a bot, without any sort of external influence, come to a unique opinion about an actual human event that it cannot either witness or personally understand? A bot is a series of lines of code. It exists inside of computer hardware. It neither thinks nor feels a compulsion to do any sort of research. It is physically unable to experience anything other than what is deliberately given.

Do I have to make myself more clear?

>Are you saying that bots have become sentient?
what makes you think id read the rest lmao you're an idiot

Ok zoomer

that was a quick reply! thank you!

I’m glad you were able to read it. I was worried for you.

>That algorithm needs a primary source of data
It's not taken from a single source; that is the entire point. It's compiled from a wide array of sources.
If a human wrote a unique article from a bunch of encyclopedia, literature, newspaper, and magazine sources, the new text would still be authored by that human; it wouldn't be considered the work of the sources. The same applies to robot journalists, whether something like a software program or more like a Joe Bernstein.

No it's not; that is not how AI works at all. They learn to develop language, syntax, and context on their own through trial and error rather than through direct guidance from a programmer or editor, who may not even understand how the AI is drawing its conclusions or coming up with its syntax, and who may not even publish some of its news because it appears to be gibberish.

Wonder if this is a shit post? The man is a very obvious shill.

Reading it has made me more and more of an advocate for paid news.

The only way is through an aggregator with open comments and a community that isn't a circlejerk of some kind.

This gives you the widest reach for potential sources of information, and on-the-spot analysis, nuance unpacking and fact-checking, because none of us is as smart as all of us.

Presently, I don't know of one that fits the criteria, and TBH, I'm mentally better off not seeing the constant barrage of clownworld-ass insane shit.

This isn't reddit. Nobody's going to upvote you for spewing a stream of pejoratives that you think mean "says things I don't like." You're ignorant, belligerent, and proud of it, which makes you undeserving of respect.

If the AI uses multiple sources and “learns” how to use them, who decides how the AI learns? Who chooses or programs the bot to learn in any particular way? Let’s say that what you are saying is true: that through magical thinking, bots have an inexplicable method of self education. Wouldn’t that, in theory, also provide them a bias?

This works well for niche news such as Hacker News. I don't know of anything that fits the criteria for more general news either.

Geez children

AI is always trained on a varied but focused data set. It doesn't know much of anything, but it is currently capable of producing some types of rote news stories, not novel, longer pieces. There is nothing magic or even that impressive about it; it's not replacing human writers any time soon.

Wikipedia is not that bad.

>Are you saying that bots have become sentient?
No, they don't subjectively internalize what they are writing; they just use math to find repeating patterns in millions of articles and regurgitate them in ways that mimic sentient thought.

>Can a bot witness a real life event and come to any sort of conclusion about the circumstances?
They can extrapolate patterns from millions of witnesses to the same real-life event and compress all those perspectives into a single set of unifying observations and conclusions about the known circumstances.

> Can a bot, without any sort of external influence, come to a unique opinion about an actual human event that it cannot either witness or personally understand?
They have a ton of external influence; that is the entire point of AI. They can emotionlessly integrate all the external influences into a completely unique perspective based on the unifying patterns in the sum of those influences.

>It neither thinks nor feels a compulsion to do any sort of research.
Neither do most human journalists.

> It is physically unable to experience anything other than what is deliberately given.
AI journalists don't produce experience (human journalists don't usually directly experience the things they are investigating either; they are secondary sources); they produce articles that mimic the experience recorded in a large set of written accounts.

You have clearly demonstrated you don't know the first thing about artificial intelligence.
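The "find repeating patterns in millions of articles and regurgitate them" idea above can be sketched, very loosely, as a bigram Markov chain: count which word follows which in the training text, then emit only continuations that were actually observed. Real language models are vastly more sophisticated, but the point that the output is recombined training data holds. The corpus below is made up for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record, for each word, every word observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Chain observed continuations together -- pure regurgitation."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("the dow rose today the dow fell today "
          "the market rose sharply the market fell sharply")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every adjacent word pair in the output appeared somewhere in the training text, which is the sense in which the bot "mimics" its sources rather than witnessing anything.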

The main fucking point I'm making is that the data is, and always will be, reliant on human fucking experience. Bots don't fucking think. They just DO. Where do you think the data comes from? From nothing? The air? Outer space? Use your head.

Reuters

The Christian Science Monitor is pretty good.

Sup Forums
And then google those events.

go outside and watch the world with your own eyes

AI didn’t create itself. Someone programs it. It is created by a human, and in turn its programming reflects the intentions of the programmer. If there is such a thing as AI, then it will, by necessity, carry the same biases and habits as the persons that created it.

I know where it comes from: news articles written by humans. Many of them, by many authors. And yeah, bias in the data makes bias in the results (resume sorting algos are famous for this).

This isn’t the same as instructing the AI on how to write, though. If we could do that we wouldn’t need training. It can be said to “learn” things that no human knew. But it isn’t magic.

>who decides how the AI learns?
The mathematical algorithms that the AI develops on its own based on the underlying mechanism of data processing.

>Wouldn’t that, in theory, also provide them a bias?
Yes, of course; they are just regurgitating information, so if they find repeating patterns of similar bias across a lot of the data sources, they will integrate that bias into their output.
A neural network specifically has billions of neurons, and each neuron has a bias, mathematically determined by parsing all the data sources, that controls how much that neuron contributes to the output.
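The "each neuron has a bias" claim maps directly onto the standard artificial-neuron formula: output = activation(sum of weight * input + bias), where the weights and the bias are the numbers adjusted during training. A minimal sketch with toy numbers (no training, just the forward pass):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, squashed by a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation, output in (0, 1)

# The bias shifts the activation threshold: same inputs, different outputs.
x = [0.5, -1.0, 0.25]
w = [0.8, 0.2, -0.5]
print(neuron(x, w, bias=0.0))
print(neuron(x, w, bias=2.0))  # larger bias pushes the output toward 1
```

Note that "bias" here is a learned numeric offset per neuron, which is related to but not the same thing as editorial bias; the two get conflated in this argument.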

Not really the intentions of the programmer so much as the goal inherent in the training data, which is a real difference. It’s not like writing a program, it’s a blind-ish process.

I never said it was magic. I used the term “magical thinking” to imply that the idea that computer programs have any sort of reliability to sort out human ideas in any sort of coherent or correct way is fantasy. When you read what a bot is saying, you are reading something that is less intelligent than a house cat. It just does something through an algorithm. A line of code. Nothing more.

i could really use a titty right now

>Someone programs it
No, the programmer uses a self-modifying mathematical framework to induce improvised sorting of large, fluctuating data sets, and has no idea how that framework will sort the input data to create novel outputs.

It doesn't carry the biases and syntax of the programmer; it carries the biases and syntax of the input data.

The problem with AI journalism is that no NLP approaches do very well at extracting facts of any kind from text or speech, and AI has only limited info from other sources. Great for a story about how the Dow rose 23 points today, but not much else; totally useless when trying to write about something like what happened in Hong Kong last night.
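The "Dow rose 23 points" kind of story is exactly what template-based automated journalism does in production: structured numbers in, formulaic prose out, no fact extraction needed. A toy sketch (the figures and wording are made up for illustration):

```python
def market_story(index_name, close, change):
    """Fill a canned sentence template from structured market data."""
    if change == 0:
        return f"The {index_name} was unchanged, closing at {close}."
    direction = "rose" if change > 0 else "fell"
    return (f"The {index_name} {direction} {abs(change)} points, "
            f"closing at {close}.")

print(market_story("Dow", 28051, 23))
print(market_story("Dow", 27989, -42))
```

This is why earnings recaps and box scores were automated first: the "facts" arrive as a data feed, so the hard NLP problem of extracting them from the world never comes up.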

Yes, but the method by which the program learns is inherently decided by the programmer. Humans can learn in a variety of ways and, depending on the method, can come to a variety of conclusions, which is why humans tend to disagree so often. Therefore, bots with a specific learning pattern will always be inherently biased.

BBC News

It is more than a line of code, though I agree it's less than a house cat. It can extract novel concepts and in some cases even separate or combine them into component vectors, so it's more than what you describe, but still nothing to get too excited about.
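"Separate or combine concepts into component vectors" refers to word embeddings, where meanings behave somewhat additively (the classic king - man + woman lands near queen demo). A toy sketch with hand-made 3-d vectors standing in for learned embeddings; real embeddings are learned from text and have hundreds of dimensions, so everything below is illustrative:

```python
import math

# Hand-crafted toy vectors with axes [royalty, maleness, femaleness].
# Real embeddings are learned, not written by hand like this.
emb = {
    "king":  [1.0, 1.0, 0.0],
    "queen": [1.0, 0.0, 1.0],
    "man":   [0.0, 1.0, 0.0],
    "woman": [0.0, 0.0, 1.0],
    "dog":   [0.0, 0.5, 0.5],
}

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(vec, exclude):
    """Closest vocabulary word to vec by cosine similarity."""
    candidates = [w for w in emb if w not in exclude]
    return max(candidates, key=lambda w: cosine(emb[w], vec))

# king - man + woman lands nearest to queen
target = add(sub(emb["king"], emb["man"]), emb["woman"])
print(nearest(target, exclude={"king", "man", "woman"}))  # -> queen
```

The arithmetic works because the "maleness" component is subtracted out and "femaleness" added in, which is the concrete sense in which concepts get separated and recombined as vector components.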

Who the fuck made the process of self-modifying mathematical algorithms? Dude, it's an easy idea to understand: someone made something to do a thing for a particular purpose. Can a hammer become a wrench?

Fuck, man.

They are certainly less capable than humans in a general sense

>you are reading something that is less intelligent than a house cat.
Yes, in terms of generalized intelligence it will not be as robust as a cat, but it will be much more efficient at specific specialized tasks than a house cat, because all of its intelligence can be devoted to a single type of data and it doesn't have to process dozens of sensory organs or consider anything other than the data set it is given.

Do you think that an “AI” can have the conversation that we are having right now? Without any sort of context from a data set, or understanding of language, or experience that was already made by a human being?

“AI” just regurgitates shit that a human has already written. Nothing more, nothing less.

No, humans disagree because they have different personalities.

By your logic everyone who went to the same church school should never disagree.

> bots with a specific learning pattern will always be inherently biased.
No; again, it's not the type of machine learning that determines the bias, it is the input data.

lol

>Use Microsoft News App
>Be discerning
>Stop failing at life
???
>Profit

AI depends on human data, but it's not regurgitation. If it were, no one would bother with it. There is an actual reason to use it: it can outperform any engineered solution for many problems.

It is, definitely, more than the sum of its inputs by virtue of clever design. But it’s easy to overhype and no, it can’t pass the Turing test on imageboards

Personalities drive a personal understanding of one's life, and so drive the way a person learns to interpret and interact with reality, which in turn affects how a person learns how things operate and evolve, and what is and is not true.

The method by which you learn is directly linked to what you think is true, i.e. your bias.

But for a machine the same is true, because it must learn in the pattern it has been programmed to learn in. It cannot reprogram itself. It has been written in a specific way, just the same way you are, in an odd sense.

AI training is literally a process of the model reprogramming itself

>Who the fuck made the process of self modified mathematical algorithms?
Harold Stephen Black is credited as the inventor of the feedback loop, one of the first self-modifying mechanisms.

>a thing for a particular purpose
No, they wanted to create something that could do things for generalized purposes yet adapt itself to particular tasks.

There are hundreds of chatbots, and the whole point of AI is that it has huge data sets to which it applies mathematical biasing to sort out shared context and find common patterns. So yes, without huge data sets to apply their mathematics to, they can't produce anything consistent with the experience of those contributing to the data set.

>“AI” just regurgitates shit that a human has already written. Nothing more, nothing less.
Exactly, they are no different from human journalists.

It isn’t a reprogramming. It is the bot utilizing its original algorithm to estimate the most appropriate response to any given input.

There is still a fundamental set of rules and codes that govern that response. It isn’t thinking. It’s just doing.

Based

Cringe and kikepilled

>Exactly, they are no different from human journalists
My recommendation for finding reliable journalism, I suppose, is the campus of a university not known for radicalized feminism. Journalism students tend to be more original than others.

Of course it is. It is only able to change certain parts of the program, within constraints, but it is changing the implementation of the target computation.

I know, it is literally my job.

If you object that there's some other part of the program that training cannot change: that's true of humans, too. We are just much better at it.

As to “thinking” or not, who cares?
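"Only able to change certain parts of the program, within constraints" is literally how training works: the architecture and the training loop are fixed code, and gradient descent rewrites only the numeric parameters. A one-parameter sketch, fitting y = w * x by repeatedly nudging w (the data and learning rate are illustrative):

```python
def train(data, lr=0.1, steps=50):
    """Fit y = w * x by gradient descent on squared error.
    The loop structure is fixed by the programmer; only w changes."""
    w = 0.0  # the single 'reprogrammable' parameter
    for _ in range(steps):
        for x, y in data:
            error = w * x - y
            w -= lr * error * x  # gradient step for 0.5 * (w*x - y)**2
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)
print(round(w, 3))  # converges to 2.0, a value the programmer never wrote in
```

The sense in which this is "self-modification within constraints": the learned value of w was never typed by anyone, yet w is the only thing that can change; the update rule itself stays exactly as written.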

It changes itself in the context of the language it is written in. Which was written by a person. There is no abiogenesis of AI. A person has written it, and it behaves in the way those persons wrote it. That is all I’m saying.

An AI cannot teach itself Japanese. It has to be programmed to do it. In the same way, an AI cannot learn ethics, or the idea of dread, or what the meaning of its life is, or whether there is a God.

It’s a machine.

>It is the bot utilizing its original algorithm
No, it doesn't precisely utilize a fixed algorithm to create output; it utilizes self-modifying mathematical frameworks and structures to create dynamic algorithms based on the input.

>Anyone know any actually reputable and truthful fucking news sources
The BBC tries to provide a balanced view (which sometimes means giving excess airtime to minority opinions like climate-change denial), but its reporting is generally fair.

Remember the Trump anti-corollary: just because a source disagrees with your echo-chamber views doesn't make it fake.

>Journalists students tend to be more original than others.
No, boomer, they are just reading more progressive material than you and seem more original to your rapidly decaying aged mind.

nope. You gotta read between the lines.

Someone, a person, a human being, designed that process. It didn’t come out of nowhere.

>An AI cannot teach itself Japanese
Google Translate literally already did that years ago, and using AI in translation is why there are occasionally wacky translations when you go between a bunch of languages.

>more progressive material
Does that mean material that happens to agree with your particular views?

AI can be designed to learn to translate between languages in general and then, without further specialization, learn to translate text between English and Japanese. That's the utility: you, the programmer, did not have to know any Japanese to make the model produce useful Japanese. Can it ever “understand” Japanese? I don't think so, nor does it matter imho. So what? It is still more than the sum of its inputs, more than a program. No human engineer can write a better Japanese translator.

You are just resisting a very ordinary nuance out of some vague principle
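The point that the programmer needs no Japanese can be illustrated with a toy word-alignment learner: given parallel sentence pairs, count co-occurrences and pick, for each source word, the target word it co-occurs with most often. Real systems (statistical or neural MT) are far more sophisticated; the parallel corpus below is made-up romanized toy data.

```python
from collections import Counter, defaultdict

def learn_dictionary(pairs):
    """Infer a word-to-word dictionary purely from parallel sentences.
    The programmer supplies no linguistic knowledge of either language."""
    cooc = defaultdict(Counter)
    for src, tgt in pairs:
        for s in src.split():
            for t in tgt.split():
                cooc[s][t] += 1  # count every source/target pairing
    # For each source word, pick its most frequent co-occurring target word.
    return {s: counts.most_common(1)[0][0] for s, counts in cooc.items()}

# Toy parallel corpus (romanized, illustrative only).
pairs = [
    ("red cat", "aka neko"),
    ("red dog", "aka inu"),
    ("big cat", "ookii neko"),
]
d = learn_dictionary(pairs)
print(d["cat"])  # -> neko
print(d["red"])  # -> aka
```

The alignment for "cat" emerges because "neko" appears in both sentences containing "cat" while the other words appear only once; the code never encodes anything about either language.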

You’d be surprised. At least at the university I live near, there is no restriction on the papers the school of journalism can put around the campus area. Further, it is prohibited to destroy or inhibit any magazine or paper produced by the department. So, you can get some very good journalism here.

>Google Translate literally already did that years ago and using AI in translation
Nope
geektime.com/2017/01/23/no-google-translate-did-not-invent-its-own-language-called-interlingua/

NN AI, for example, is based on neurons, whose mathematical framework is not a human invention but a discovery from nature, made after decades of measurement and neurological investigation.

Dude an actual dog can understand basic words.

Would you want to get your news from a dog?

No, just newer material you are unaware of that provides you with the illusion of originality.

Most AI is not neural networks, and neural networks have almost nothing to do with biological neurons.

The actual nature of the brain, its neurons, and the functions of most of it is unknown outside of conjecture. Try again, buddy. Don't compare a chatbot to a fully functioning human brain.

No, they are trained to respond to noises for reward; they do not know the noises are words. They are animals and do not understand the concept of words.

Cat parent identified

This, OP. Everything else is just bread and circuses. 24 hour news networks, whether right or left, have one job and one job only - to feed you stuff they know you want to hear so you don't change the channel.

Well ... that’s true of machine learning models, too

That is something else; I wasn't talking about making a new language, I was talking about parsing one in common use.
It clearly says in the title of your article that Google Translate uses Google’s Multilingual Neural Machine Translation System for translation.

Google Translate is an NN; NN is a valid form of AI that disproves your general claim about all AI, and with a sweeping statement like that I only need one counterexample. I was just using it as one example because it was the easiest to explain to someone like you who doesn't understand the basics of AI.

Hasn't existed since 1913. That was the last year the news reported what mattered.

Heyy!...nice Elon

They are machines and don’t understand the concept of words.

Checkmate.