Google AI Defeated Chinese Master in Ancient Board Game

Machine learning is advancing quickly as many countries invest heavily in artificial ‘deep learning’; the latest example is a Google AI program beating a Chinese grandmaster at the ancient board game Go.

trunews.com/article/google-ai-defeated-chinese-master-in-ancient-board-game

>(BEIJING) A Google artificial intelligence program defeated a Chinese grand master at the ancient board game Go on Tuesday, a major feather in the cap for the firm's AI ambitions as it looks to woo Beijing to gain re-entry into the country.

>In the first of three planned games in the eastern water town of Wuzhen, the AlphaGo program held off China's world number one Ke Jie in front of Chinese officials and Google parent Alphabet's (GOOGL.O) executive chairman Eric Schmidt.

>The victory over the world's top player - which many thought would take decades to achieve - underlines the potential of artificial intelligence to take on humans at complex tasks.

>Wooing Beijing may be less simple. The game streamed live on Google-owned YouTube, while executives from the DeepMind unit that developed the program sent out updates live on Twitter (TWTR.N). Both are blocked by China, as is Google search.

>Google pulled its search engine from China seven years ago after it refused to self-censor internet searches, a requirement of Beijing. Since then it has been inaccessible behind the country's nationwide firewall.

>write procedural software to systematically brute-force all future moves based on the current game state
>pick the one with the highest confidence of leading to a win condition

Can this even be considered AI?
You can solve anything if you throw enough GPU compute cores at it. I wanna see this level of skill come out of a machine that actually learned to play Go without brute-forcing.

Protip: It did not brute force.

>The board game is favored by AI researchers because of the large number of outcomes compared to other games such as western chess. According to Google there are more potential positions in a Go game than atoms in the universe

It's virtually impossible to brute-force Go, which makes it a favorite among AI researchers

Noob here

What does the "deep learning" algorithm/code of this Google AI actually look like?

How do you design and program "deep learning" AIs?

I'm pretty sure it's centered around allowing the program to change its own code in order to fulfill a directive.

sigai.acm.org/

Years late fag
github.com/deepmind
You retards think that it can modify its own code.
Deep learning is more or less a complex search algorithm that finds solutions faster than brute-forcing.

I take that back; it might be able to modify its source if it were given negative feedback, though I don't know how it could do that. Either way, that's not the means to the end in OP's case.

>Go
>Ancient board game

>trunews.com
this happened years ago

the real question is, could this AI beat me in a game of Sorry!?

cuckmate.

>neuralnetworksanddeeplearning.com/
there you go, Sup Forums.

yeah until this AI can beat me in video games I don't give a fuck

From what I understand Go is a very fluid game, more so than something like Chess.

And you have a set amount of thinking time for the entire game, so using brute force on every move would likely result in it exceeding the time limit.

It was beating a Korean last year, or the year before.

From what I've heard, it's mostly searching via Monte Carlo, and the evaluation is done by feeding positions into a learning program and making it play with itself a bunch until it's good enough.

There's probably some stuff I missed out on, but yeah, most programs, even in chess, use search-and-evaluate. It just so happens that this one has tons of money behind it.
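
In case anyone wants to see the search + evaluate skeleton concretely, here's a toy sketch. To be clear, this is NOT DeepMind's code: the game (a mini Nim), the stub value_estimate heuristic, and every name in it are invented for illustration. AlphaGo swaps the stub for a deep value network trained on self-play games and adds a policy network to focus the search, but the select / expand / evaluate / backpropagate loop below is the same Monte Carlo tree search idea.

[code]
import math
import random

# Toy Nim: a pile of stones, each player takes 1 or 2, taking the last stone wins.
class NimState:
    def __init__(self, stones=7, player=1):
        self.stones = stones
        self.player = player              # whose turn it is: 1 or -1

    def legal_moves(self):
        return [m for m in (1, 2) if m <= self.stones]

    def play(self, move):
        return NimState(self.stones - move, -self.player)

    def is_terminal(self):
        return self.stones == 0

    def winner(self):
        return -self.player               # whoever just took the last stone

def value_estimate(state):
    # Stub "value network": estimated win probability for player 1.
    # AlphaGo replaces this with a deep net trained by self-play.
    if state.is_terminal():
        return 1.0 if state.winner() == 1 else 0.0
    return 0.5                            # no opinion about non-terminal positions

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}                # move -> Node
        self.visits, self.value_sum = 0, 0.0

    def ucb_child(self, c=1.4):
        # Upper confidence bound: trade off exploitation vs exploration.
        return max(self.children.values(),
                   key=lambda n: n.value_sum / (n.visits + 1e-9)
                   + c * math.sqrt(math.log(self.visits + 1) / (n.visits + 1e-9)))

def mcts(root_state, n_simulations=500):
    root = Node(root_state)
    for _ in range(n_simulations):
        node = root
        while node.children:                      # 1. selection
            node = node.ucb_child()
        if not node.state.is_terminal():          # 2. expansion
            for m in node.state.legal_moves():
                node.children[m] = Node(node.state.play(m), parent=node)
            node = random.choice(list(node.children.values()))
        v = value_estimate(node.state)            # 3. evaluation (no random rollout)
        while node:                               # 4. backpropagation
            node.visits += 1
            # v is from player 1's view; store it from the view of whoever moved in.
            node.value_sum += v if node.state.player == -1 else 1.0 - v
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("best move:", mcts(NimState()))   # almost always 1, the game-theoretic best move
[/code]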

On a side note though, actually "learning" the rules of the game is probably possible, but if the rules are well defined, and relatively easy to implement, why bother?

The same is also true of chess...

Chess is a far simpler game.

They're working on a StarCraft AI, so watch out.

I agree though, once they can build a bot better than Flash then I'll believe it.

False. Chess has 10^120 possible game positions. There are only 10^80 atoms in the universe. Kys

According to the latest interview, the current AlphaGo thinks 50 moves ahead thanks to how efficiently it guesses what the likely options are.

That basically means AlphaGo will have a lead against a human by the time they reach the mid-game, and that at mid-game AlphaGo is already calculating endgame variations.

>implying

>Google AI finally defeats the chink

Go is really dumb and boring. Chess is by far more interesting and fluid. Go is only complex to a computer due to the larger grid size.

>Chess has 10^120 possible game positions
lol

You may be right that Chess is more interesting, but that isn't the point.

leave Sup Forums forever

Yet Google Maps still gets me lost. It's all hype

10^40 for possible positions, 10^120 for possible games

chess.com/blog/Billy_Junior/number-of-possible-chess-games

There are 2.082 × 10^170 legal positions on a 19x19 Go board and about 10^40 sensible positions on a chess board. Get fucked chessfag, Go is the Patrician board game.

senseis.xmp.net/?NumberOfPossibleGoGames

en.wikipedia.org/wiki/Shannon_number

Google AI should try to play LoL

Yeah, and all of those 2.082 × 10^170 positions are confirmed to be boring as fuck. Get bent pebble boy.

Google AI would get rekt

Power of Macbook Pros.

I could arrange my dick 2.082 × 10^170 different ways inside your mom, but that doesn't mean it's superior to chess.

>They're working on a StarCraft AI
I fucking wish


unless the AI is using a keyboard and mouse it's not fair

Dude, have you *seen* my mom?

They've already made a DOOM AI whose input is nothing but the pixels on the screen and whose output is keyboard and mouse events.

It significantly outperforms the built-in AI and other machine-learning AIs.

theverge.com/2016/11/4/13518210/deepmind-starcraft-ai-google-blizzard


>Ancient Board Game
>oh shit what is it
>Go
I got clickbaited but without the clicking part

Is it possible for AI to simulate games of poker?

Yes. There is nothing that AI won't be capable of doing better than baseline humans; it's just a matter of time.

riverscasino.com/pittsburgh/BrainsVsAI

the problem with Go is its branching factor; chess has nothing even close to it
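
Back-of-the-envelope numbers make the gap obvious. Using rough conventional figures (branching factor ~35 over ~80 plies for chess, ~250 over ~150 plies for Go — ballpark estimates, not exact counts), game-tree size is roughly branching_factor^plies:

[code]
from math import log10

# log10(game tree size) ~ plies * log10(branching factor); rough figures only
chess_exp = 80 * log10(35)     # ~123.5, the ballpark of the Shannon number (10^120)
go_exp = 150 * log10(250)      # ~359.7

print(f"chess game tree ~ 10^{chess_exp:.0f}")   # chess game tree ~ 10^124
print(f"go game tree    ~ 10^{go_exp:.0f}")      # go game tree    ~ 10^360
print("atoms in observable universe ~ 10^80")
[/code]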

>Machine learning
>Not bruteforce
Pick one and only one.

I wish rito was competent enough to develop an offline API for playing/replaying

this is very important research but i want to briefly summarize its limitations since you'll be hearing a lot of hype

the fundamental problem with games as a testbed is that human beings are shit at games. this is why we enjoy them, because they require thinking that is difficult for us

from an algorithmic perspective, Go is not an absurdly hard problem, and a big part of AlphaGo's success is that it plucked low-hanging fruit. Go is a "long-standing challenge in AI research" but there is zero funding to work on it, so even very simple modern methods had not been applied to Go.

Google has enough money to spend on "fun" projects like this. the actual architecture is not revolutionary and Go is not a problem that thousands of researchers were stuck on.

i get the impression that your definition of brute force includes literally every problem in NP-H and beyond

there exist problems with no polytime solver, get used to it

>no known polytime solver

Your brain is bruteforce too, retard. There's literally more cells in the brain than there are atoms in the atom.

we've proven EXPTIME != P, so EXPTIME-complete problems have no polytime solver, and
>Other examples of EXPTIME-complete problems include the problem of evaluating a position in generalized chess, checkers, or Go (with Japanese ko rules).

AI with 40000 apm cannot lose to a human past midgame.

It seems like most people in this thread are claiming that high APM and the ability to have perfect control over everything happening on the map will give the AI an advantage that will force a win.
Here is a video of one of the best current StarCraft bots losing to a D-rank (low skill) human player. The bot's APM is ~5500 while the human's is ~200. youtube.com/watch?v=ztNYOnx_YQo
The fact is, no AI has ever beaten even an amateur player in a tournament. Even with great micro, if your play is too predictable then the human will learn it and exploit it.
I, for one, am very excited to see the development of new StarCraft AIs, and especially SC2 AIs, so they can challenge the current world champions.

well it can if you bait it into a local minimum of sorts and it does not randomize well
but yeah, humans stand no chance against a well-made one

>more cells in the brain than there are
>atoms in the atom
>atoms
>atom

SC2 seems like it'd be very good for the AI if it plays Terran.

Marine micro and those reapers are both early game. Could constantly harass a human player.

How long before we can build a human capable of beating top AI players at Go?

> What is branch and bound

>How long before we can build a human capable of beating top AI players at Go?
Well, at some point in the distant future Go will be solved completely. At that point, who goes first will determine the outcome.

Are there AI vs AI tournaments for stuff like this?

Maybe he meant atoms in the cell? He's kinda right though. Our brains are super inefficient at making calculations compared to any form of microprocessor. The human brain has far more in common with these "bruteforcing" neural networks than it does with conventional CPUs (yes I know it's not actually brute forcing)

you are right
the main algorithm is Monte Carlo, and the ANN is used to evaluate the "quality" of a board configuration
this evaluation function needed by Monte Carlo is very, very important and shouldn't be looked down upon

Yes. sscaitournament.com

There are always matches going on.

Tournaments are once a year I think, and the winner usually goes on to fight a human

It can already beat you in Breakout.

How does one "save" all the training and experience that a machine-learning AI obtains during all its time practicing with itself? It's like an application running across a ton of GPUs networked together, right? Can it be un-loaded and re-loaded to different hardware? Or does the AI have to constantly be running on it, otherwise it will have to relearn everything?

forgive me if these are stupid questions, this is a new concept to me

I wanna know this as well

oh sweet child

What it "learns" is the correct weights to its neurons' sigmoid function. The entire "brain" can be stored in less than a kb (just a bunch of floating point numbers)

the basic idea behind all machine learning is to find a function that "solves" the problem, e.g.
function(current_board) -> next_move
i say "solves" because machine learning is most commonly used when no practical solution function can exist (this is very common, and we have proven this is true for Chess and Go). the learned function is just a "mostly right" heuristic

the "learning" process consists of testing many candidate functions, but the final "AI" is just the best function. so you can "run" the AI if you just save the best function. however, if you want to continue learning, it is helpful to retain information about the failed candidate functions, so you don't re-test them again

"neural networks" are just parameterized functions in which the parameters are compositions of much simpler functions like "max", "multiply", and then other stuff you might not have heard of like sigmoid and convolution functions (which are still just simple math crap). the reason they are powerful is because you combine a LOT of functions and we have very good methods to explore candidates

That's not how it works, though.
That's the scary part.

It actually learns.

>People are going to believe that this automatically means AI is superior to humans
>mfw
will the retardation ever end?

cont.
the reason people say neural networks are "hard to understand" is not because they're conceptually complicated. what i just described is basically all there is to it even though it's extremely simplified. the problem is that a pile of sigmoid parameters doesn't reduce to anything like a "rule list" for how to play Go, so it's very hard for us to learn something by inspecting the neural network. right now, the way that people learn from AlphaGo is by inspecting its actual games

if by "actually learns" you means "descends a gradient to find a locally optimal composition of parameters." that's no small feat, but probably the greatest sin of AI memesters is to reduce all learning to the current accomplishments of ML, when in fact we know very little about how learning works. there's obvious and basic tasks that ML has never accomplished

the entire reason to pursue Go was for the futurist hype, it has no practical value. i assume the DeepMind folks are smart enough to not recommit the "we can play chess, so object recognition will be solved in 5 years" fallacy

Is that how machine learning works?
If I remember right, you give it a predefined set of commands and then the computer learns which one to do when, like in the video related:
youtube.com/watch?v=qv6UVOQ0F44

>Googlebot has beat Korea and China

If it were a Japanese guy he would have committed sudoku.

The problem is the hype though. Most people don't understand the process behind it, so they assume it is magic that will somehow compete with the human mind, or even surpass it.

Stephen Hawking, a physicist, claimed false shit like a computer that ran when unplugged calling itself God. The retardation never ends, it seems.

I don't doubt that AI will be very useful, as it can learn to do certain operations allowing for a high level of autonomy, but machine learning is like a clock that can automatically shift gears; it is not the path to mimicking the human mind in any competent way.

Machine learning could be useful for game AI, as it would allow for AI that can match your skills.
If I remember right, the AI in the video related used this method of learning and became quite the challenge to beat.
youtube.com/watch?v=opPKgY43Zwk

>The problem is the hype though. Most people don't understand the process behind it, so they assume it is magic that will somehow compete with the human mind, or even surpass it.
Aren't you making the mistake of assuming that the human mind is literally magic, then?

nope.
I just don't think AI can match the human mind without matching the bio-mechanical processes behind it. I doubt computer architecture itself will ever be able to match the human mind.

And for those who think consciousness is an epiphenomenon of the mind: it's not. I doubt AI will ever be conscious, let alone feel emotion.

I think with more complicated games, the AI benefits greatly from getting a little bit of human assistance during the initial learning stages. Otherwise it takes ages to learn some things. In DOOM, for example:

>Though the AI agent relies on only visual information to play the game, Chaplot and Lample used an application program interface (API) to access the game engine during training. This helped the agent learn how to identify enemies and game pieces more quickly, Chaplot said. Without this aid, they found the agent learned almost nothing in 50 hours of simulated game play, equivalent to more than 500 hours of computer time.

cmu.edu/news/stories/archives/2016/september/AI-agent-survives-doom.html
video:
youtube.com/watch?v=94EPSjQH38Y
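
the trick they describe — privileged engine info during training only — looks roughly like this. heavy hedge: this is a toy paraphrase of the idea, not the CMU authors' code, and every name and number below is invented. the real agent also learns from game rewards; this toy only shows the auxiliary-supervision part:

[code]
import random

# Toy version of training-time-only auxiliary supervision. During training the
# agent gets the engine's ground truth ("is an enemy visible?") as an auxiliary
# prediction target; at test time it sees pixels only.

class ToyEnv:
    def render(self):
        # "pixels": 1.0 if an enemy pixel is on screen, else 0.0
        self.enemy_visible = random.random() < 0.5
        return [1.0 if self.enemy_visible else 0.0]

    def api_query(self):
        # privileged engine info, available during training only
        return self.enemy_visible

class ToyAgent:
    def __init__(self):
        self.w = 0.0   # one weight mapping the "pixel" to an enemy prediction

    def predict_enemy(self, pixels):
        return self.w * pixels[0]

    def update(self, pixels, aux_target):
        # auxiliary loss: predict the engine's ground truth from pixels;
        # this extra supervision is what speeds learning up so much
        error = (1.0 if aux_target else 0.0) - self.predict_enemy(pixels)
        self.w += 0.1 * error * pixels[0]

env, agent = ToyEnv(), ToyAgent()
for _ in range(200):                       # training: pixels + engine API
    pixels = env.render()
    agent.update(pixels, aux_target=env.api_query())

pixels = env.render()                      # test time: pixels only, API unplugged
print("enemy predicted:", agent.predict_enemy(pixels) > 0.5,
      "actually visible:", env.api_query())
[/code]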

>If I remember right, you give it a predefined set of commands and then the computer learns which one to do when, like in the video related:
training examples are a way of speeding up the search for candidate functions. you can treat them as ground truths, e.g. "given this board, the best play is definitely this move." or you can treat them as higher-order operations, e.g. "this collection of little moves should be considered as one big move when you explore candidates"

in some cases training examples necessarily define the optimization target. for example, in Go winning has a precise definition, but if your target is to recognize a bird then you need training examples to define what a bird is. but there are cases of unsupervised AI building a function that can distinguish e.g. cats without being told what a cat is (because a cat has distinct features and they co-occur together a lot, whenever a cat is in a picture)
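
a toy version of "training examples define the optimization target", with all data invented: the labels are the only thing telling the learner what a "bird" is.

[code]
# learn a threshold classifier ("is this a bird?") from labeled examples
examples = [(0.9, 1), (0.8, 1), (0.7, 1),   # (feature score, is_bird label)
            (0.2, 0), (0.3, 0), (0.1, 0)]

# "learning" = pick the threshold that classifies the training examples best
best_t, best_correct = None, -1
for t in [x / 100 for x in range(101)]:
    correct = sum((score > t) == bool(label) for score, label in examples)
    if correct > best_correct:
        best_t, best_correct = t, correct

print("learned threshold:", best_t)        # 0.3 -- derived purely from the labels
print("is 0.85 a bird?", 0.85 > best_t)    # True
[/code]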

>I just don't think AI can match the human mind without matching the bio-mechanical processes behind it.
Since the bio-mechanical processes are as much a black box as AlphaGo's value network, no one is in any position to make that claim.

>And for those who think consciousness is an epiphenomenon of the mind: it's not. I doubt AI will ever be conscious, let alone feel emotion.
Can you ever prove that I have a consciousness or have emotions? Can you ever prove that you are not the only human on Earth who is conscious, and that everyone else is a soulless meat robot?

Just because you know YOU are self aware, doesn't mean you know anything about anyone else.

it's not clear whether the biological processes are particularly special, it's a common fallacy to think that evolution finds radically optimal solutions. i am certain the brain is full of dumb shit, hacks and kludges

the thing is those kludges may be necessary to get human-like performance. there's no guarantee that good functions are "elegant." neurobio guys are painfully aware of this, but the AI guys have a lot of trouble recognizing their own limitations because the world is sucking their dicks and funding them. this has all happened before. there probably won't be a crash but there very well could be a ten or twenty year doldrum where advertising and stupid home products is "good enough"

>there are more potential positions in a Go game than atoms in the universe
KEK

thank you based black science man

the AI guys invite these philosophical critiques by perpetuating radically reductionist ideas, like "neural" networks and "learning." there's nothing wrong with reductionism when you recognize it as an experimental toy model, but the futurists blur the distinction between contemporary toy models and reality, which is lazy science

i don't think contemporary AI is philosophically interesting, I for one can understand what a composition of sigmoids is and how you would relax it by backpropagation. it's the futurists that keep trying to spin a question of consciousness out of this very simple shit. the brain is not simple

Could a complex enough function be created to accurately simulate the behavior of an insect or possibly an animal? Maybe even a human?

>Just because you know YOU are self aware, doesn't mean you know anything about anyone else.
I always wondered if there was a name for this kind of thinking. Apparently it is called "epistemological solipsism"

I suppose I am, as they say, BTFO?

I will acquiesce, there is indeed much that is unknown about the human mind and how it operates, but more is known about AlphaGo's value network than about the human mind, since the engineers who built AlphaGo's value network understand its function and what went into making it the way it is. So if anything is a black box, it is certainly the organic brain and not AlphaGo's value network.

>Can you ever prove that I have a consciousness or have emotions?
No, I cannot; I can only assume that is the case. The fact that you and I have the exact same architecture gives me confidence in asserting that you are like me in the capacity for consciousness and emotion. If AI came close enough I would be confident in the same knowledge, though whether it has a soul is another question, which I feel has an obvious answer.

>the thing is those kludges may be necessary to get human-like performance.
I would think so. Being a believer in the wabi-sabi concept of imperfection being perfection, I feel that a perfectly structured AI would lack something human.

> the AI guys have a lot of trouble recognizing their own limitations because the world is sucking their dicks and funding them
damn right. I think that as humans we will not be able to build anything like the human mind, as we like order and structure (like blueprints, or circuit boards), two things that are sometimes lacking in nature. If we want to make human-like AI we will need to embrace this fact and start creating 3D circuits.

>there probably won't be a crash but there very well could be a ten or twenty year doldrum where advertising and stupid home products is "good enough"
see example related to see the levels of retardation.
youtube.com/watch?v=DHY5kpGTsDE

>atoms in the atom
senpai...

>I always wondered if there was a name for this kind of thinking. Apparently it is called "epistemological solipsism"
Well, the matter here is that in order to definitively claim that a computer can't have a consciousness, we would need to be able to prove that humans have consciousness. And it isn't good enough that every person knows they themselves have one; that isn't proof that someone outside of yourself is the same as you.

They're right though, at least for the known/observable universe. Just for chess:
>According to the Shannon number there are around 10^120 possible games of chess, while there are only about 10^80 atoms in the observable universe.

>There's literally more cells in the brain than there are atoms in the atom.

I guess you're not wrong

>Could a complex enough function be created to accurately simulate the behavior of an insect or possibly an animal? Maybe even a human?
probably not, no. We humans create simple stuff because simplicity is our saint.

>I always wondered if there was a name for this kind of thinking. Apparently it is called "epistemological solipsism"
fun fact. Epistemology is the philosophical school of thought pertaining to knowledge, its attainment, and whether we truly have it. Philosophical skeptics believe that we cannot attain any knowledge at all, though how they KNOW that, I am not sure.

Epistemological solipsism, being a form of skepticism, is weak in the sense that it assumes it KNOWS its claim is the case without proving it. We can assume that other humans are conscious, since when we open them up the architecture in them is the same as ours; they bleed, we bleed. So it follows the logical assumption of similar patterns: humans around me act like I act, so logically they must have some capacity to be like me, and since I am conscious, they must also be conscious.

>Could a complex enough function be created to accurately simulate the behavior of an insect or possibly an animal? Maybe even a human?
well, from a certain perspective of physical realism that's what an insect is; insofar as physical laws are mathematical, they demonstrably operate on certain collections of matter to produce a working insect or brain

there's a lot of assumptions baked into that statement about the nature of physics and its relationship with computation. in fact modern physics has a lot more in common with computer science than people realize, information theory for example

having said that, it's very common in computer science to get "possible, but..." results where e.g. a certain computer architecture cannot describe a function that runs in any *practical* amount of time. the relationship or anti-relationship between current computer architectures and biological systems is an open question

it is worth remembering that good researchers are neither celebrities nor business people, so don't get too deluded by entrepreneur shit. there is real work being done by people who are not morons, they just don't advertise it

please see my argument here:

I suppose it is worth remembering that.

>Epistemological solipsism, being a form of skepticism, is weak in the sense that it assumes it KNOWS its claim is the case without proving it. We can assume that other humans are conscious, since when we open them up the architecture in them is the same as ours; they bleed, we bleed. So it follows the logical assumption of similar patterns: humans around me act like I act, so logically they must have some capacity to be like me, and since I am conscious, they must also be conscious.
You only weakly prove that humans have consciousness. But since you still don't know how to define consciousness, you can't claim that humans are the only ones with it. The issue is that you are trying to prove machines can't have consciousness by claiming they are made of different materials.

You have NOT proved that human bodies are the only way consciousness can manifest. You are just assuming all swans are white because you have never seen a black swan.

machines still can't play Calvinball

I think he just means that it's impossible to *prove* others are conscious because you can never be someone else; you can only interpret the world through your own senses. It's the obvious conclusion to say that other people are indeed conscious like yourself, especially when you can have a clone or identical twin with your exact genetic makeup and observe them behaving just like you do. But that conclusion is still based on an assumption that cannot ever be proven.

We can't confidently make this same conclusion about AIs ever having a consciousness because they are so different from us (or so we assume, considering we don't understand how consciousness arises in the human brain).

If we were ever to make a machine learning AI that is an exact simulation of the human brain, then I guess we could say that it has a consciousness with the same level of certainty that we say other people do.