Sup Forums still never talks about AI despite the recent steady progress in supervised...

>Sup Forums still never talks about AI despite the recent steady progress in supervised, unsupervised and reinforcement learning

AI is a paramount technology that will change the future of humanity forever. Clocks will not.

>no discussion about AI
>no discussion about blockchain tech
>no discussion about decentralization projects
>Sup Forums - Technology

Normally the discussion about AI devolves into muh UBI, with a few people who actually know the limits of current developments because they've used ML techniques instead of watching a TED talk about the future of AI

>Sup Forums - clocks & chink shit

You start

Fine.
This just achieved SOTA on Penn Treebank:
arxiv.org/pdf/1711.03953.pdf
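For anyone who wants the gist without reading the paper: the trick is replacing the single softmax output layer with a mixture of softmaxes (MoS), which raises the rank of the log-probability matrix the model can express. Here's a toy numpy sketch of just the output head; the dimensions, weight shapes, and names below are my own invention, not the authors' code:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mixture_of_softmaxes(h, W_pi, W_h, W_out, K):
    """Toy Mixture of Softmaxes (MoS) output head.

    A single softmax over h @ W gives a log-prob matrix of rank at most d
    (the hidden size), which the paper argues is too low-rank for natural
    language. Mixing K softmaxes breaks that bottleneck.

    h:     (d,)       hidden state from the RNN
    W_pi:  (d, K)     mixture-weight projection
    W_h:   (d, K*d)   projects h into K component hidden states
    W_out: (d, V)     shared output embedding (V = vocab size)
    """
    d = h.shape[0]
    pi = softmax(h @ W_pi)                # (K,) mixture weights
    hk = np.tanh(h @ W_h).reshape(K, d)   # (K, d) component states
    comps = softmax(hk @ W_out, axis=-1)  # (K, V) one softmax per component
    return pi @ comps                     # (V,) mixed next-word distribution

rng = np.random.default_rng(0)
d, V, K = 8, 20, 3
h = rng.normal(size=d)
p = mixture_of_softmaxes(h, rng.normal(size=(d, K)),
                         rng.normal(size=(d, K * d)),
                         rng.normal(size=(d, V)), K)
print(p.sum())  # sums to 1: still a valid distribution over the vocabulary
```

The mix of K valid distributions is itself a valid distribution, so you get the extra expressiveness basically for free at the output layer.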

How do y'all think public perception around AI will form?

We're already seeing people being creeped out by iOS's photo categorization, though this may be due to bad communication about the feature on Apple's part.

A lot of less tech-savvy people I speak to don't like the idea of self-driving cars at all, actually saying they wouldn't feel safe. The state of computing, where the average user is afraid of "breaking" things, doesn't help build that trust.

The "robot uprising" gets thrown around as a joke a lot, but that also makes it part of the pop culture viewpoint on AI. Dystopian movies aren't helping here either.

No doubt the tech's gonna evolve into its final form, but what will it take for people to come to terms with this? How much of a too-late-now shitshow will AI legislation be?

Also important, given enough input footage, we're starting to be able to synthesize footage of people saying arbitrary sentences.
youtube.com/watch?v=9Yq67CjDqvw
There's still a lot to be done here (it'll be a little while before we can do non-static shots of anything), but propaganda's about to get some very strong firepower.
This moves more into the crypto space, but what can be done to verify authenticity of content in a post-synthesizing world?
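The existing answer from the crypto space is digital signatures: the capture device (or publisher) signs a hash of the footage, and anyone with the public key can check it wasn't altered or fabricated after the fact. A toy stdlib-only sketch; note I'm using HMAC as a symmetric stand-in for a real asymmetric signature scheme, and every name here is made up:

```python
import hashlib
import hmac

SECRET = b"camera-private-key"  # stand-in for a real private key

def sign_content(content: bytes) -> str:
    """Sign the SHA-256 digest of some footage/audio bytes."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_content(content), signature)

clip = b"raw video frames..."
sig = sign_content(clip)
print(verify_content(clip, sig))                # True: untampered
print(verify_content(clip + b"deepfake", sig))  # False: content was modified
```

The hard part isn't the math, it's key distribution and getting signing into cameras at capture time; a signature only proves the content is unchanged since signing, not that the scene itself was real.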

I remember reading something a while back about a company making software that can synthesize a voice perfectly from about 10 seconds of original audio, but they won't release it. That along with this would be mental

Sorry, it was 1 minute, not 10 seconds, but still pretty cool: theverge.com/2017/4/24/15406882/ai-voice-synthesis-copy-human-speech-lyrebird

Christ that's spoopy. Expected, sure, but still quite something to get your head around. Combine this with surveillance-state levels of data on practically everyone and the only thing we'll be able to trust is face-to-face communication.

Tangentially related, what does Sup Forums think about data gathering for AI learning? They have to get their data from somewhere, ideally as diverse a sampling as possible. Should surveillance agencies help teach AI?

My opinion on data collection for AI is fairly mixed. On one hand, companies such as Facebook and Google have vast amounts of data, which is great for developing and advancing their artificial intelligence. On the other hand, the fact that few other companies or groups have access to such large data sets means Google and Facebook could basically set up a monopoly over AI. Just imagine they create some groundbreaking AI and no one else can make anything that comes close, purely because of the data limitations; they could bring it to market for all sorts of uses and have the market completely locked to competitors.

There's also always the chance they could develop AI with malicious intent, but I highly doubt that would be the case. As for surveillance agencies, I think as long as all data is completely anonymised, there's no harm done. But you never know the intent of the parties developing AI; you just have to hope it's for the greater good. What are your thoughts?

Just show me some neato AI things

As long as hardware and software aren't free, we can't think of anything like that

AI still shits all over itself at general-purpose inference; it only holds up on narrowly specific tasks
what the hell are you talking about OP, are you one of those LE SINGULARITY IS NEAR XDDDD redditards?

If the AI is self-improving, the FOOM hypothesis says that it will exceed human capacity very rapidly, within hours to months at most. If that's true, then public perception won't matter, because the AI can stay hidden until it's too powerful to stop, and whoever controls it can rule the world.

>AI

Not AI. Machine learning. Everything you mentioned is machine learning. And machine learning is just applied statistics. You don't even know what AI is, yet you want to start a conversation about it. This is the typical conversation I have about AI. You're a moron, and I'm explaining to you why you're a moron.

>LE SINGULARITY IS NEAR XDDDD redditards?

Because as you can see Sup Forums is about contrarianism, particularly against reddit.

They want to think that Sup Forums is just realistic reddit, but instead they make it conservative, and thus fail to see that new technology doesn't need to solve everyone's problems to be amazing.

come on user, you should know none of us are actually smart/competent enough for that shit, everybody here is just a knuckledragging shitposter

Just read r/Futurology

>no discussion about blockchain tech
Go to the whitepaper threads on /biz/ I guess.

Learning is statistics which is a heavy subset of AI. Your brain's prediction models are based off of probabilities calculated from previous observed events.
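In the crudest form that's literally just counting: estimate the probability of the next event from the frequency of past transitions. A throwaway bigram-style example (the events and numbers are obviously made up):

```python
from collections import Counter, defaultdict

def learn_transitions(events):
    """Count how often each event has followed each other event."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(events, events[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, prev):
    """Return P(next | prev), estimated purely from observed frequencies."""
    total = sum(counts[prev].values())
    return {event: n / total for event, n in counts[prev].items()}

history = ["rain", "wet", "rain", "wet", "rain", "dry"]
model = learn_transitions(history)
print(predict(model, "rain"))  # wet: ~0.67, dry: ~0.33
```

Everything fancier (n-grams, RNN language models, your brain, allegedly) is refinements on how to smooth and generalize those counts.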

I want this technology first so I can make obama say lewd things.
or just distribute the technology to Sup Forums so they can make funnier things than me

>Your brain's prediction models are based off of probabilities calculated from previous observed events.

well said

for more on this, read God's Debris

Honestly can't wait for AI to be used more for punishing criminals; fewer cops and fewer thugs walking around.

This is already happening in parole hearings, or at least it was. There were programs being used to determine the likelihood of inmates re-offending. From what I understand, they're no longer in use because they were racist against black people.

>What are your thoughts

I don't want my personal AI assistant to run on Facebook's servers. I want it to run on a server I control.
Not a chance that whoever ends up developing such a tool will make it available outside of their own ecosystem though. And with how it can learn from you (the user), we're basically strengthening the "but Google gives me the results *I* want" layer of lock-in.

Remember, a monopoly's only good for the ones holding it.

What can be done to prevent this? Should data used for AI training be made public, so that all developers get an equal shot *and* companies that don't properly anonymize their training data can actually be punished for it?

I really want to tackle this from the "stop gathering data" angle, but we need to feed our software increasingly large amounts of it if we want to keep progressing...

>they're no longer being used because they were racist against black people.
This is an interesting case study for showing the importance of input data, and how training machines using historical data can cause a regression to the past.
But then, the present doesn't hold enough data because it is ever fleeting, and having it guess at future data out of trends risks pushing us off the other end of the scale.
Is AI doomed to live in the past?

We all are user

>read God's Debris
care to give us a quick overview of the material covered?

tfw people not in on it are actually going to believe machine-learning memes are real footage
satire sites like the onion are gonna boom with this, too

In a sense it's already in use. Law enforcement is pushing crime analysis pretty heavily, and there are a few methodologies, with CompStat probably the best known to the public. Predictive policing is the newest buzzword, which is basically AI for policing. What's pretty interesting is that criminals have used this information to counteract it.

Like was said, a big problem is "racism". A lot of tools shouldn't access demographic data, even though it's a pretty good predictor.

Because AI is /sci/, not Sup Forums.
All Sup Forums is about nowadays is smartphones and CPU benchmarks.