The current speed of AI research progress

>The current speed of AI research progress

2spoopy

Is Sup Forums ready for the future?

((they)) will never let AI get too good, at least not for public use, it will cause too much of an economic shift.

Memeing will not stop the rise of AI overlords. Better be prepared.

arxiv.org/pdf/1802.07740.pdf

silly Sup Forumsposter

>Memeing will not stop the rise of AI overlords. Better be prepared.
I am not memeing.

"Experts" are literally saying we need to keep AI research away from public access, for "security reasons".

>predict world model loss
Interesting. Source?

Source? I haven't heard anyone say that. Maybe you're listening to fear-mongering sources.

Neural reinforcement learning doesn't work
alexirpan.com/2018/02/14/rl-hard.html
AI has a reproducibility crisis
science.sciencemag.org/content/359/6377/725

why the fuck are you afraid of a bunch of matrices finding local optima of multivariable functions in random ways, you /r/futurology shitstains
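
Since this keeps coming up, here's roughly what "matrices finding local optima of multivariable functions" cashes out to in practice. A toy sketch in plain numpy (made-up loss function, not anyone's actual training code):

# Toy illustration: gradient descent finding a local optimum of a
# multivariable function. At its core this is all "training" is.
import numpy as np

def loss(w):
    # an arbitrary non-convex multivariable function
    return np.sin(3 * w[0]) + w[0] ** 2 + (w[1] - 1) ** 2

def grad(w, eps=1e-5):
    # numerical gradient, good enough for a toy example
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (loss(w + d) - loss(w - d)) / (2 * eps)
    return g

w = np.random.randn(2)      # random init, so which optimum you land in is random
for _ in range(500):
    w -= 0.05 * grad(w)     # follow the slope downhill

print("local optimum found at", w, "loss", loss(w))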

go suck off carl sagan's dead body somewhere else

>"Experts" are literally saying we need to keep AI research away from public access, for "security reasons".

Ackchually, "Experts" are literally publishing open source papers to maximize public AI research. The underlying thinking is that if multiple AIs rise at the same time from different sources, no one will prevail over the others.

blog.openai.com/

arxiv.org/pdf/1802.07442.pdf
>muh vintage clocks
fuck off

who the fuck said anything about clocks
get le black science man's dick out of your mouth before speaking

>memeing about AI
>Literally all it is is statistics

OpenAI winning against a TI champion team 5v5 when?

Soon. This year maybe?

no shit, fear-mongering over finding function approximations using statistics.
> but muh saviour elon musk s-says its gunna be like t-terminator!
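
For anyone who wants the "function approximations using statistics" point made concrete, here's a minimal sketch: fit a polynomial to noisy samples of an unknown function with plain least squares. Toy numbers, nothing to do with any particular lab's code; scale up the parameter count and data and you have the modern recipe.

# Fit a cubic polynomial to noisy samples of an unknown function.
# "AI" in the current sense is this idea scaled up with more parameters and data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)   # noisy observations

coeffs = np.polyfit(x, y, deg=3)      # least-squares fit = pure statistics
approx = np.polyval(coeffs, x)        # the learned "model"

print("max approximation error:", np.max(np.abs(approx - np.sin(x))))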

Yeah all AI research is about finding ways to statistically improve the training.
It's just fiction that thinks of AI as the achievement of sentience.
We don't have a clue what sentience is so how could we create it? Unless it's something like (but not necessarily) system complexity (which would necessitate the universe, galaxy, solar system, earth and downwards recursively to be sentient). We will probably never accidentally discover it.
We will never be able to tell either. It's feasible that sentience is much more common than we think too.
We just take humans as the lower bound for convenience, much like how certain human groups don't have souls for convenience reasons according to ideology.

Why you bring this up here is beyond me.

>arxiv.org/pdf/1802.07740.pdf
So, yet again 'state of the art' refers to grabbing papers from the 70s/80s, applying them, and putting a big name atop a new paper?

> Steadily adding on 'duh' features with no overarching architectural concept..
> no spoopy at all

Yes. Why is that a bad thing?

These same 'experts' were claiming we should open up and publish all AI research some time ago. It doesn't seem they are 'experts' or have reasoned foresight. It seems they are carefully weighing business decisions and market strategy, and tuning their mouthpieces to try to sway the public when they do. Either way, it doesn't matter as it's just noise.
Fear/uncertainty/doubt .. modern day marketing/manipulation tools to drum up attention and funding.

>science.sciencemag.org/content/359/6377/725
^someone gets it..
> this week in hacked/slapped together weak AI...
Another paper from the 40s/50s/60s/70s/80s was dusted off, titled 'state of the art', and claimed by 'AI experts' after rephrasing some words and applying it. Meanwhile, the fundamental hard problems aren't being reconsidered or even worked on, cause you know.. Gotta pump out state of the art white papers every week to justify muh $280k salary.

Yes, because in the 70s/80s we didn't have the processing power. Now we can implement it.

I really doubt it; 5v5 is orders of magnitude more complex than a 1v1 with restrictions.

Wrong, and OpenAI aren't experts.
I think Elon just resigned from the board recently after getting what he needed for his flailing self-driving car tech at Tesla. OpenAI also recently joined in to back the voice of flip-floppers declaring a lot of AI research should be kept private. Meanwhile, brainlets don't know what to do now that their borg mind has flip-flopped. Still parroting what the borg mind said in the past :
> Everyone should give billion dollar IP away to rich companies and shit bro.. think of the children

>I think Elon just resigned from the board recently after getting what he needed for his flailing self-driving car tech at Tesla.
I think you're talking out of your ass since both Elonmeme and the OpenAI guys explicitly said he resigned to AVOID potential conflicting interests because of Tesla.

You guys are shilling way too hard.
Maybe it could first try to win a 1v1 without basic bitch rules of interaction scripted and restricted. See pic and paper next to player :
> Please don't do this or you'll break our precious dumb algo bot
> Still breaks their bot and kicks its ass
> muh amazing AI experts
> give shekels
> weak ass bot

top kek.. It's just faggot tier marketing for shekels. These guys are still stealing IP/ideas of forgotten visionaries from the 40s-80s putting their name atop whitepapers. Absolutely clueless of how to truly create something original and profound themselves.

Yes. Computer science may seem stagnant but it's mainly because what's come before is very general.
Consider Amdahl's law.
They're referencing the old stuff usually. If you find that they haven't explored previous research sufficiently you report that to the university.
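
Amdahl's law in one function, since it was just invoked. A minimal sketch with hypothetical numbers, only to show why throwing more processors at a problem hits a ceiling set by the serial fraction:

# Amdahl's law: if a fraction p of a workload can be parallelized,
# the best possible speedup on n processors is 1 / ((1 - p) + p / n).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, speedup tops out around 20x.
for n in (1, 8, 64, 1024):
    print(n, "processors ->", round(amdahl_speedup(0.95, n), 2), "x")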

It's quite frankly not AI at all .. It's applied statistical optimization.
It's not a bad thing. It's an overlooked and big thing, as the credit and attention should focus on the pioneers/true experts from the 70s/80s. Not some derps trying to justify million dollar salaries who spent a week fumbling with shit tier code to try to test its validity. It also reflects how little these 'experts' know, such that they continually have to dig up what TRUE EXPERTS were doing 30-60 years ago.

Furthermore, given that's what's going on .. then there is no big difference between a billion dollar group of 'ai experts' and someone capable of accessing white papers from that period, copying their ideas, and writing code... But dick riders will convince you otherwise :
> THEY BE KANGZ AT COMPANY X/Y/Z

Brainlets are unable to understand this. Also, they gave the challenger a script and reduced the interaction down to reaction time which any plugged bot can do. Hilariously, in a short span of time, he figured out how to break their bot and still beat its ass.
> mfw shit tier code gets shamed
> mfw marketing stunt gone wrong
But brainlets will believe it was ground breaking AI because they must always BELIEVE in something/someone other than themselves.

>not AI at all
It's what people know as AI. Describe what you think AI is.

>they must always BELIEVE in something/someone other than themselves.
the public doesn't understand anything about AI at all and the people doing shit like wheeling out a computer and facing it off against the most washed up dota2 player to prove its 'superiority' aren't helping

> Elon slathers his name all over a crackpot consortium of rag tag individuals who have no plan for how to move AI forward
> Mission statement changes every week
> No one knows what the hell their purpose is
> Elon begins his propaganda campaign declaring the end of the world is coming due to AI : Oh by the way I have an organization who will save us all. Please give shekels to OpenAI
> Still no one understands wtf their purpose is
> OpenAI declares their goal is to cheerlead suckers into publishing valuable IP for free so other companies can profit from it
> OpenAI figures out they have to show something so takes other people's ideas and bottles them up into OpenAI Gym but forgets about fundamental shit like latency/etc
> It's an epic disaster but is spun up as grand in the usual media outlets
> OpenAI retrofits its website with stolen conceptual framing
> WE AGI NOW
> Elon poaches talent for his Tesla venture
> Begins toning down the coke+wine induced ranting
> Sees that the shill fest resulted in nothing grand
> Distances himself from OpenAI to focus instead on trying to finally turn a profit at a single venture
> OpenAI signs on with new endorsement that AI tech/research should be kept private/secret or it will be a threat to the world.. completely reversing their previous stance
> Have figured out that 'make me regulator, I'll save the world' might not work
> Continue randomly rebooting old pioneers'/visionaries' IP as their own .. hopefully hit the jackpot

The strategy for a large number of 'prominent' AI groups has always been : Make up fantastical bullshit claims.. Put together some canned demo that brainlets will get excited about.. Ask for shekels, get well funded, republish forgotten pioneers' whitepapers every week, and try their best to live up to the lofty promises they made to investors.

I think I know exactly what I'm talking about and I think you need to stop shilling your shit tier group on Sup Forums.

what are the chances that people play dota2, browse Sup Forums, etc. and are in an AI/DNN/CNN related field ..
Pretty high I guess.. :P

There's a difference between referencing and relabeling. I'm sure I'm clear on the structure of white papers. What's comical is, if you go to the top 3 most frequently referenced papers, it reads like a damn near copy of everything found in them. That's not referencing. That's literally copy/pasta... something that is rife in academic circles. Let's not forget about the guy who created LSTM in the 90s and titled his damn paper as such, only to have it stolen and claimed by others w/ no prominent recognition. I recall him being booed and even laughed at in a top conference when he complained as such... So much for academic integrity.

What conflicts of interest? They can say whatever they want for PR reasons, doesn't make it true.

>meme design
>canned demo
>republished forgotten whitepapers

I guess you've done a quick round on the site and memed about it afterwards, well done. Try to read and understand the papers they've published; if you are able to, you'll see they aren't repurposed concepts at all.

> It's what people know as AI.
It's what brainlets know of AI...
> Describe what you think AI is.
Present day? applied statistical optimization.
Indeed. These guys are out for money #1. Everything else is secondary.. They do ridiculous expositions and laud them as something they're not because they know it will win over brainlets and money. Afterwards, they figure they will try to do the actual hard work they promised everyone, and nothing of true promise results. So, they then go rifling through forgotten heroes' white papers trying to find out who truly did the hard work, give them a quick reference, and try to spin someone else's work as their own for profit. It's downright despicable but is the nature of our current world .. Absolutely no integrity, lies to dumb down, manipulate and fool the public for profit. Theft of others' hard work.

And the kicker is, they also gave this washed up player a script restricting interaction to who can press the attack button first once in range. Even then, this clown managed to win and break their bot because the code/capability is that shit tier. It's not the first time they fumbled the ball. They released OpenAI Gym, and since they don't have any actual engineers there, they forgot about communication latency between their platform and the target program, thus it was an utter failure. Elon Ventures everyone...
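
On the latency complaint: whether or not that's a fair account of OpenAI Gym's problems, the reset()/step() loop it popularized does mean every action waits on a round trip to the environment, which is where wall-clock latency bites against a real-time game. A self-contained sketch with a dummy environment standing in for the real thing (all names here are hypothetical, not Gym's actual code):

# Gym-style agent-environment loop with per-step latency measurement.
# DummyEnv is a stand-in for a real-time game process; in the real case the
# round trip includes IPC/network time, which is exactly where latency bites.
import random
import time

class DummyEnv:
    """Minimal environment exposing the reset()/step() convention Gym popularized."""
    def reset(self):
        return 0.0                                   # initial observation

    def step(self, action):
        time.sleep(0.002)                            # pretend the game takes 2 ms to respond
        obs = random.random()
        reward = 1.0 if action == 1 else 0.0
        done = random.random() < 0.05
        return obs, reward, done, {}                 # (obs, reward, done, info)

env = DummyEnv()
obs, done, latencies = env.reset(), False, []
while not done:
    t0 = time.perf_counter()
    action = 1 if obs > 0.5 else 0                   # trivial "policy"
    obs, reward, done, _ = env.step(action)
    latencies.append(time.perf_counter() - t0)

print("mean step latency: %.1f ms" % (1000 * sum(latencies) / len(latencies)))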

>Sure they're lying, I AM right.
You forgot the tinfoil hat.

Whose paper? I read them regularly and have a graduate degree.. Brainlet. Link to the specific paper you're referring to and I'll give you a non-brainlet digest in 5-10min.

>OpenAI
>AI tech/research should be kept private/secret
good goy

This is a funny concept

top kek.
Originally they were of the completely opposite position to try to bait autistic geniuses into giving away billion dollar IP thus the name :
> OpenAI

Then they hilariously reboot their website with designs lifted from someone else and declare they're magically building AGI. Then they flip flop and declare AI research should be held private now that they've milked the open research/apocalypse idiots. This is who brainlets think the future will be delivered from.. a bunch of class A con-artists. Elon conveniently exits stage left due to some b.s. PR blurb after he's poached the valuable talent for Tesla

electrek.co/2018/02/21/elon-musk-leaves-open-ai-tesla-ai-effort/
>Last year, Tesla ended up hiring Andrej Karpathy, who was working as a research scientist at OpenAI before joining the automaker. Open AI’s work is open-source so I doubt that Tesla profiting from Musk’s inside information at the non-profit was a big concern. But I think that hiring researchers from the organization might have been the bigger issue.

Shaddy ass nigger

Their logo is a literal Star of David

extremetech.com/extreme/262510-new-report-self-driving-cars-ranks-tesla-dead-last
> New Report on Self-Driving Cars Ranks Tesla Dead Last
WE WUZ KANGZ AND CHIT

Look at their prior logo and website via archives and look at the redesign... Also, ask yourself what that logo has to do w/ open AI. It looks rather closed and knotted to me. They then go on to use it continually throughout their website to adorn every section and subsection to the point of abuse. Seems some shit tier web developer/artist was given a seed source of ideation and he went full retard w/ it. Theft of IP, ideation, and spinning out others' ideas for profit is their bread and butter though. So it all makes sense in the end.

Also their Twitter account is now protected:
twitter.com/open_ai

lol, you're joking right?
Post screencap

are those the property brothers

Typical "AI" experts

twitter.com/openai

@open_ai was the original official account if you check the archive of their web site

>present day?
No, obviously I mean what you think AI is.
As you said 'it's quite frankly not ai at all'.
I'm asking what you think AI is.

So far it just sounds like you're complaining.

can someone define AI for me?

thanks