So this happened in 2015, and I have no idea if this was discussed here, but I just watched the documentary on Netflix.
I was impressed up until they explained how Alpha came up with its moves on the board.
Isn't Alpha just the equivalent of bringing a cheat sheet to a blind maths test?
And isn't using thousands of previous games as references for moves just encoding human error into its play style, meaning that Alpha losing that one game was inevitable, like the infinite-monkeys-with-typewriters-eventually-arriving-at-Shakespeare concept?
tl;dr: I know Alpha isn't even close to AI, but why did people get so fucking hyped for this cheat sheet robot?
Imagine taking millions of fanfics and finding the invariant structure of well-written stories, to make some system that could write Shakespeare-level quality.
Daniel Cook
fucking what
Oliver Hughes
>why did people get so fucking hyped for this cheat sheet robot?
Because I FUCKING LOVE SCIENCE
David Stewart
People were hyped because Google managed to achieve what no one could before: produce a computer program that could win a game of Go against a pro.
AlphaGo is not equivalent to bringing a cheat sheet to a blind maths test. The comparison is inane; I have no idea how you came up with it.
Adam Murphy
welcome to machine learning, retard
Lucas Walker
It makes moves based on the probability of winning from previous games. It only places stones on the grid areas most likely to achieve a win overall, whereas you have a human playing who at most only has revised strategies.
The comparison is basically perfect hindsight up against what is essentially linear thought.
Jack Sanchez
Is it learning when it already has the answers and the information provided to make the best moves?
Brayden White
We know fuck all about how humans play, so you can't make an honest comparison.
>It only places stones on the grid areas most likely to achieve a win overall
That is a very gross oversimplification of an already simple procedure AlphaGo follows. I think your Netflix video didn't actually tell you what AlphaGo does.
Brayden Bailey
We summon the spirits of the computer with our spells.
Noah Howard
Well, go ahead user, explain what the lead programmers of AlphaGo could not in my Netflix video.
Luis Cruz
The neural network estimates, given the board state, the likelihood of a player placing his stone on each board cell, and those probabilities are used to do a usual Monte Carlo tree search.
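(Rough Python sketch of what that looks like, in case it helps. This is not DeepMind's code: ToyState, the uniform policy_net stand-in, the PUCT-style scoring and the constants are all made up for illustration, and the real AlphaGo additionally uses a value network and fast rollout policies.)

import math
import random

class ToyState:
    """Stand-in for a Go position: players alternate adding 1-3, first to reach 10 wins."""
    def __init__(self, total=0, player=1):
        self.total, self.player = total, player
    def legal_moves(self):
        return [1, 2, 3]
    def play(self, move):
        return ToyState(self.total + move, -self.player)
    def is_terminal(self):
        return self.total >= 10

def policy_net(state):
    """Placeholder 'policy network': uniform prior over legal moves.
    In AlphaGo this is a trained conv net giving a probability per board cell."""
    moves = state.legal_moves()
    return {m: 1.0 / len(moves) for m in moves}

def rollout(state):
    """Random playout; +1 if the player to move at `state` ends up winning."""
    me = state.player
    while not state.is_terminal():
        state = state.play(random.choice(state.legal_moves()))
    winner = -state.player  # the player who made the last move won
    return 1.0 if winner == me else -1.0

class Node:
    def __init__(self, state, prior=1.0):
        self.state, self.prior = state, prior  # prior comes from the policy net
        self.children = {}                     # move -> Node
        self.visits, self.value_sum = 0, 0.0
    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """PUCT-style score: prefer good values, plus rarely-visited moves the policy likes."""
    total = sum(ch.visits for ch in node.children.values())
    def score(item):
        _, child = item
        explore = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return -child.value() + explore  # child value is stored from the opponent's view
    return max(node.children.items(), key=score)

def mcts(root_state, n_iters=500):
    root = Node(root_state)
    for _ in range(n_iters):
        node, path = root, [root]
        while node.children:                       # 1. selection
            _, node = select_child(node)
            path.append(node)
        if not node.state.is_terminal():           # 2. expansion, priors from the policy net
            for move, p in policy_net(node.state).items():
                node.children[move] = Node(node.state.play(move), prior=p)
        value = rollout(node.state)                # 3. evaluation (AlphaGo also uses a value net)
        for n in path:                             # 4. backup along the visited path
            n.visits += 1
            n.value_sum += value if n.state.player == node.state.player else -value
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]

print(mcts(ToyState()))  # prints the first move the search prefers for the toy game

The point is just that the network's output acts as a prior telling the tree search which branches are worth visiting; nothing is looked up from a table of past games.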
Connor Ortiz
I love Wikipedia too, user. Well done on trying to sound more intelligent than you actually are.
Nathan Cruz
User, your point is not valid. This is nowhere near having a cheat sheet, and "moves based on the probability of winning from previous games" does an abysmal job of describing the process. If you want to talk about AlphaGo, the first thing you should do is show that you understand what it does. You did the reverse of that.
My explanation was as valid as any other semantic crap you copy-pasted from a wiki article.
Justin Allen
I never opened the wiki page. I explained to you in simple terms how AlphaGo works. None of this is semantic crap.
Cooper Fisher
this user gets it
>None of this is semantic crap.
Perhaps take a moment and re-read the thread.
Christopher Davis
The linked document says exactly what I said.
>The neural network estimates, given the board state, the likelihood of a player placing his stone on each board cell, and those probabilities are used to do a usual Monte Carlo tree search.
David Jenkins
>the likelihood of a player placing his stone on each board cell, and those probabilities are used to do a usual Monte Carlo tree search.
Yes user, keep going, you almost have exactly what I said (almost semantically). What does it use to inform these decisions? Surely it's not just blank maths formulas? It's almost like it uses some kind of available/pre-programmed something-or-others to make the best moves. I wonder what these trees are almost implicitly made up of????
Owen Morgan
The neural network that does those predictions does not use a tree; it's an ordinary convolutional network. The probabilities are used after the network produces them to do a Monte Carlo tree search (and the net is used on every Monte Carlo tree search iteration, tens or maybe hundreds of thousands of times per second). The output of the network is not used to make a move directly, but to eliminate unnecessary branches for the Monte Carlo tree search. You said something else entirely.
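(If "an ordinary convolutional network" sounds vague, here is a tiny PyTorch sketch of the idea. The class name, layer sizes and the three input planes are invented for brevity; the actual AlphaGo policy network used far more feature planes and layers.)

import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 19  # Go board size

class TinyPolicyNet(nn.Module):
    """Toy 'policy network': encoded board planes in, one probability per board point out."""
    def __init__(self, in_planes=3, channels=32):
        super().__init__()
        self.conv1 = nn.Conv2d(in_planes, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.head = nn.Conv2d(channels, 1, kernel_size=1)  # one logit per cell

    def forward(self, planes):
        x = F.relu(self.conv1(planes))
        x = F.relu(self.conv2(x))
        logits = self.head(x).flatten(1)      # (batch, 19*19)
        return F.softmax(logits, dim=1)       # probability for every board point

net = TinyPolicyNet()
position = torch.zeros(1, 3, BOARD, BOARD)    # crude encoding of an empty board
priors = net(position).reshape(BOARD, BOARD)  # these priors steer the tree search

The output is one probability per board point, which is exactly what the tree search consumes as priors on each iteration.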
Ethan Cruz
The games aren't learned by rote; parameters are inferred from the games that have been viewed, basically. It's the same as doing exercises before the math test to understand the underlying concepts, not the same as taking a cheat sheet. At least for the main underlying system. The thing is, AlphaGo is actually a collection of hacks upon hacks on top of that system, and it's nowhere near as clean and cool and nice as the media like to portray it. It's amazing in the sense that it was able to beat top humans at Go, a never-before-accomplished feat and one that many, including me, thought wouldn't be possible in the next 50 years. It's not amazing from a technology or innovation perspective, however.
If you want to debate whether a model is memorizing or learning, there are several papers showing that various ML models do not memorize. There is also the recent fad, Wasserstein GANs, which one group of people strongly believes is in fact memorizing shit, unlike usual ML models.
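(Toy sketch of the "parameters are inferred, not memorized" point: supervised training of a small policy net on (position, expert move) pairs, with random tensors standing in for real game records. None of the names, sizes or numbers below come from the actual system, which also had a reinforcement-learning stage and a separate value network.)

import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 19

# Deliberately tiny stand-in policy net: board planes in, one logit per board point out.
policy = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1), nn.Flatten(),
)
opt = torch.optim.SGD(policy.parameters(), lr=0.01)

for step in range(100):
    positions = torch.randn(32, 3, BOARD, BOARD)             # fake encoded positions
    expert_moves = torch.randint(0, BOARD * BOARD, (32,))    # fake recorded pro moves
    loss = F.cross_entropy(policy(positions), expert_moves)  # nudge the net toward the recorded move
    opt.zero_grad()
    loss.backward()
    opt.step()

# Only the weights in policy.parameters() survive; the game records are never stored.

After training, all that is left is the weight tensors; the games themselves aren't stored anywhere, which is why the cheat-sheet analogy doesn't hold up.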