Artificial Intelligence General - BACK WITH A VENGEANCE

Fun Video
youtube.com/watch?v=qv6UVOQ0F44

Now with Discord server.
discord.gg/010Hh6pWkFkahZ8yM

Come and discuss, learn about, and panic over all things artificial and (marginally) intelligent.

Links:

>Deep learning frameworks

tensorflow.org/ (google)
caffe.berkeleyvision.org/ (Berkeley Vision and Learning Center)
deeplearning.net/software/theano/ (Université de Montréal)
cntk.ai/ (Microsoft)
torch.ch/

>Frameworks comparison

en.wikipedia.org/wiki/Comparison_of_deep_learning_software

>Required reading

deeplearningbook.org/

>Intro videos for beginners:

youtube.com/channel/UC9OeZkIwhzfv-_Cb7fCikLQ/videos

>Required watching

youtube.com/watch?v=bxe2T-V8XRs&list=PL77aoaxdgEVDrHoFOMKTjDdsa0p9iVtsR

>Something else that you should probably watch

youtube.com/watch?v=S_6SU2djoAU&list=PLy4Cxv8UM-bXrPT9-ay4E1MuDj1KFTg9H

youtube.com/watch?v=6oe1Tmg9rjM

>Books

magnet:?xt=urn:btih:a680ca553dd69a6a5e8b7dc8c684ceb006c7ecc5&dn=Artificial%20Intelligence

>Other Resources

facedetection.com/
opencv.org/
kaldi-asr.org/
sirius.clarity-lab.org/

How are those networks comin' along?

Doing an IT project with Caffe. We have to create tasks for students who have never worked with neural networks before.

For my Bachelor's I'll be working with M$'s new toolkit, CNTK, checking out which of its learning methods produce the best results for word prediction. I don't know which texts to choose, though. There's a bible bot on Twitter, for example, that randomly generates a new verse every few hours. I plan on doing the same thing with my network, just on a different topic.

bump

Info on your pic related? Looks interesting and I would like to know more about modifying raw input

did you watch the video?

No.

> Fun Video
> youtube.com/watch?v=qv6UVOQ0F44

Who here RoboCup?

What the fuck are Mario's inputs?

See

Please post a link to begin learning this shit from scratch. I'm not sure I'll be able to start with a paper about neuron evolution.

Is there any tutorial for dumb people?

>learning this shit from scratch
AI in general or neural networks?

Some of both, but I think it would be more appropriate to begin with AI in general

You know he did that,
but did that AI start from zero in each new level? Seems like it should carry over.

>but I think it would be more appropriate to begin with AI in general
Reasonable thinking.

Search algorithms are usually a starting point
tutorialspoint.com/artificial_intelligence/artificial_intelligence_popular_search_algorithms.htm

Stanford has lessons on YouTube
youtube.com/watch?v=UzxYlbK2c7E
And so does MIT
youtube.com/watch?v=TjZBTDzGeGg

I'd give those a go. If nothing else they'll give you a general idea of the field, so you know what to google in the future.

whens waifus?

The inputs were the simplified level in the top left. Enemies were represented with just a black box. It would have worked better if it used the original screen as the input, but training would have gone from one day to weeks or months for good results. Some of the objects don't even show up: I don't think the balls thrown by the sport things appear in the inputs, and the checkpoint doesn't either.

Saved your post for later reading, for when I can be bothered to learn machine learning.

Hmm

I agree.

The ball shows up, it's a black box.

shit thread

You are shit

How can I update a NN without the backpropagation bullshit?

>Without backpropagation
Use 1 layer.
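To expand on that: a single layer can be trained directly with the delta rule, no backprop through hidden layers needed, because every weight connects straight to an output. A rough sketch (toy AND-gate data, all names made up):

```python
def train_perceptron(samples, lr=0.1, epochs=50):
    """Single-layer perceptron trained with the plain delta rule."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # forward pass: weighted sum through a step activation
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - y
            # delta rule: nudge each weight toward the target output
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# learn logical AND
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

The catch, of course, is that one layer can only learn linearly separable problems, which is exactly why people put up with backprop.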

can someone please seed them books

You don't update it

OK, but how does an actual brain make and remove connections? I have no idea how to update the weights on these nodes.

For every output there's an error, so for each error you go back to each neuron of the previous layer and change its weight according to the learning rate and the error. Then you repeat the same for each earlier layer.
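In code, that loop looks roughly like this. A minimal sketch using a made-up two-layer net on XOR (nothing to do with the Mario setup, sizes and learning rate picked arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# toy dataset: XOR, a stand-in problem for illustration
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
lr = 1.0

# loss before training, for comparison
loss0 = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2))

for _ in range(5000):
    # forward pass
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)
    # backward pass: error at the output, pushed back one layer at a time
    d_out = (Y - T) * Y * (1 - Y)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    # each weight change is proportional to learning rate * error * activation
    W2 -= lr * H.T @ d_out
    W1 -= lr * X.T @ d_hid
```

Same idea scales to more layers: compute the deltas from output backwards, then apply all the weight updates.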

lad...

My knowledge on the matter is pretty superficial and I have no idea what that picture even is, but I know error backpropagation is not something that happens biologically. At least not in the same way as in basic MLPs.

>>/mlp/

Isn't that just brute forcing the game?

Maybe if you're a retard that doesn't know what "brute forcing" means.

It uses an evolutionary algorithm.

Brute forcing would be randomized input while opening a ton of instances or speeding up the game an enormous amount,
and then saving the best run.

No, but like all university-level machine learning projects it's an incredibly slow, inefficient way to achieve a "complete" result from scratch, and it inevitably winds up with the bot doing stupid shit over and over yet still "winning" because the fitness function is overly simplistic.

They aren't brute forcing the game, as in trying every input combination at every frame until they find a path that works, but they ARE brute forcing some pre-programmed mutators. So the generation that favoured moving right failed; let's mutate it with the generation that stood still and jumped, so now we have one that just walks right and jumps over and over. Holy crap, that made it 25% of the way through. Merge with the one that crouches constantly... two minutes and no progress, fail. Merge with the one that taps Y... 30%... holds Y in spurts, 50%...

They usually fervently deny that's what they're doing, but that's just because they've written such incredibly convoluted code that they've confused themselves into thinking they've made some important insight into "machine learning."
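For what it's worth, the loop being complained about there is just a plain genetic algorithm. Stripped of the NN and the game, it's roughly this (toy fitness function, all names and numbers made up):

```python
import random

random.seed(1)

def fitness(genome):
    # toy stand-in for "how far right Mario got": just count the 1s
    return sum(genome)

def mutate(genome, rate=0.05):
    # flip each bit with a small probability
    return [g ^ 1 if random.random() < rate else g for g in genome]

def crossover(a, b):
    # single-point crossover between two parents
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

# random starting population: 20 genomes of 32 bits
pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]
start_best = max(map(fitness, pop))

for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]                      # keep the best runs unchanged
    pop = elite + [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(15)
    ]

best = max(pop, key=fitness)
```

Because the elite survive each generation untouched, the best fitness can only go up or stay flat, which is exactly the "save the best run and mutate it" behaviour described above.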

Oh, an AI thread, neat. I have nothing to add, but I've always wanted to get into it. Thanks for the links.

Not really. It won't decide "let's now jump over and over"; it generates input based on the blocks around the character, assigning weights to the types of blocks and their relative positions.

This is still just a more convoluted form of brute forcing.

I can't find the previous thread right now, so I'll ask it again.

What's your opinion on Expert Systems?
I've got to study them for my university AI course, and I'm going to go deep into CLIPS.
Is there still room left for improvements in rule-based systems or is everything just >muh deep NN now?

>Fun Video

>wasting all this time, effort, and apparently talent on a fucking video game...

>thinking it's a colossal effort of genius to make a basic NN
It might be, for some people, I guess.

Yes. Also known as evolution

How does the NN deal with varying quantities of inputs?

It's not, but he specifically said he was researching a scholarly paper and implemented the genetic algorithms himself.

Time, effort, and apparently some talent.

>convoluted brute forcing

Sure, just like AlphaGo

again, maybe for some people. But that's not an uncommon thing, you know.

Any teenager can implement an ANN. It's not that hard. Using a game is a fun way to learn it. I learned how to do it with boring shit like malignant/benign mole stats

You can kill usrelf honey! ;^)

I'm trying to make an ANN that's more similar to a natural NN. Not like "layers" of neurons that only connect to the next layer, but a large blob of neurons that can be connected in whichever way to each other. I have signal propagation relatively complete, I just don't know how to train the network.

Each grid space is probably an input, 0/1 for an enemy, and maybe another grid of 0/1 inputs for standing positions. There are also techniques that use 0/1/2 for enemy/standing tile/empty, but from what I've seen that doesn't work as well as discrete inputs unless you do significant normalization or preprocessing.
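A sketch of what that encoding might look like, with a made-up 3x3 window and made-up tile codes (this is just the general idea, not the actual representation from the video):

```python
# toy 3x3 window around the player; tile codes are invented here:
# 0 = empty, 1 = solid/standable, 2 = enemy
window = [
    [0, 0, 2],
    [0, 0, 0],
    [1, 1, 1],
]

def encode(window):
    """Flatten the window into two separate binary input vectors,
    one channel per tile type, instead of feeding raw 0/1/2 codes."""
    solid = [1 if t == 1 else 0 for row in window for t in row]
    enemy = [1 if t == 2 else 0 for row in window for t in row]
    return solid + enemy   # 18 inputs for a 3x3 window

inputs = encode(window)
```

The point of splitting into channels is that "enemy" and "standable" stop being ordered magnitudes (2 > 1 > 0 means nothing to the net) and become independent binary features.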

Do you mean a Hopfield network?

en.wikipedia.org/wiki/Hopfield_network
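For reference, Hopfield nets are trained with a one-shot Hebbian rule rather than backprop, which might suit a fully-connected "blob" of neurons. A minimal sketch (tiny made-up pattern, synchronous updates for simplicity):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: W += outer(p, p) for each stored +/-1 pattern,
    with the diagonal zeroed (no self-connections)."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, float)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, steps=5):
    """Repeatedly threshold the weighted sums until the state settles,
    ideally onto one of the stored patterns."""
    s = np.asarray(state, float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

# store one +/-1 pattern and recover it from a corrupted copy
pattern = [1, -1, 1, -1, 1, -1]
W = train_hopfield([pattern])
noisy = [1, -1, 1, -1, 1, 1]   # last unit flipped
out = recall(W, noisy)
```

The training here is a single pass with no gradients at all; the price is very limited storage capacity (roughly 0.14n patterns for n neurons).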

OP here.

Starting Thursday we'll go over some of the learning material relevant to machine learning and try to build up our knowledge and skills together. We'll meet once a week, from 5 PM to 7 PM PST.

For this first lesson I'm thinking a review of linear algebra is in order, so I've taken the liberty of finding some videos and books to help us.

If you're interested, you know.

I've got exams coming, so I'll probably catch up and lurk around later this summer. It would be cool if you could keep a pastebin or something with the materials.

Yeah, I'll start compiling one.

MIT OCW has good linear algebra material

Also, for quick math stuff, codecogs.com/latex/eqneditor.php is good. The default output is a transparent-background GIF, which looks kind of bad, so it's best to change that.

Oh btw, forgot to add.

This is going to be on the Discord server so that we can all just talk and communicate easily. It'll also give the server a reason to exist.

I know, that's what I was planning to use.

>Do you mean a Hopfield network?
That looks somewhat similar to the network I set up. I'll check to see how it gets trained and see if I can apply it to mine.

Fuck off Seth. Every single person I've shown your videos to points out how annoying your fucking face is for some reason. Within the first 10 minutes.

No, this is bruteforcing the game: youtube.com/watch?v=n_PylKovg5k

>Sup Forums fixates on NNs and shits on "university book lernt projects"
Typical.
NN is just weighted functions, stuffed into a black box called a layer. Come back when you've done something beyond watching some pop sci YouTube vids

> using NN for a classification problem
> falling for the meme this hard

Sadly this is more hill climbing to brute force the level.

See aigamedev.com/open/interview/mario-ai/ for an actual Mario AI

I don't understand what the fuck is going on in that video.

Does that learn how to play the game or just how to beat that level?

Explanation here: tasvideos.org/5076S.html
>tl;dr
After the initial setup (which takes 01:33.26), the game has been reprogrammed to brute force the final Bowser fight by trying every possible combination of input for 1 frame, then 2 frames, then 3 frames, then 4 frames...
The game was reprogrammed to reset state back to the start of the fight between each try, and it will detect when the game is completed and eventually stop.

The hexadecimal you see over the sprites is external; it's there to help the runner get it set up by exposing the internal state of the game in a visual way. It's left in the video so you can see what's going on. You could play this TAS on an actual console and you wouldn't see that, but it would still brute force the game eventually.
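The search described there is basically iterative deepening over input sequences. A toy sketch with a tiny made-up button set and a made-up win condition, obviously nothing like the real emulator setup:

```python
from itertools import product

BUTTONS = ["A", "B", "L", "R"]   # invented, tiny button set for illustration

def brute_force(wins, max_frames=4):
    """Try every input sequence of length 1, then 2, then 3, ...
    (resetting the game state before each attempt in the real setup)
    until the `wins` predicate says the game is beaten."""
    for length in range(1, max_frames + 1):
        for seq in product(BUTTONS, repeat=length):
            if wins(seq):
                return seq
    return None

# toy win condition standing in for "Bowser fight cleared"
result = brute_force(lambda seq: seq == ("A", "B", "A"))
```

The sequence count grows as buttons^frames, which is why this is only feasible when a win exists within a handful of frames.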

He just wants to make a name for himself.

I love linear algebra, so I'll be around here, but not in the discord, sorry. The MIT OCW site for the linear algebra course has problems and solutions and exams and stuff, too.

>I'm trying to make an ANN that's more similar to a natural NN. Not like "layers" of neurons that only connect to the next layer, but a large blob of neurons that can be connected in whichever way to each other. I have signal propagation relatively complete, I just don't know how to train the network.


This is possible, look at

blog.otoro.net/2016/05/07/backprop-neat/
and eplex.cs.ucf.edu/ESHyperNEAT/

When you build them, user!

You can evolve it, or apply any other stochastic algorithm (e.g. simulated annealing). It will be slow, but it works.
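A minimal simulated-annealing sketch over a flat weight vector, with a toy quadratic loss standing in for "network error on the training set" (everything here is made up for illustration):

```python
import math
import random

random.seed(0)

def anneal(weights, loss, steps=2000, temp=1.0, cooling=0.999):
    """Simulated annealing: perturb the weights, keep the change if it's
    better, and occasionally keep it if it's worse, with that probability
    shrinking as the temperature cools."""
    cur, cur_loss = list(weights), loss(weights)
    best, best_loss = list(cur), cur_loss
    for _ in range(steps):
        cand = [w + random.gauss(0, 0.1) for w in cur]
        cand_loss = loss(cand)
        # accept improvements always; accept regressions with prob e^(-delta/T)
        if cand_loss < cur_loss or random.random() < math.exp((cur_loss - cand_loss) / temp):
            cur, cur_loss = cand, cand_loss
            if cur_loss < best_loss:
                best, best_loss = list(cur), cur_loss
        temp *= cooling
    return best, best_loss

# toy loss: squared distance to a target weight vector
target = [0.5, -1.0, 2.0]
loss = lambda w: sum((a - b) ** 2 for a, b in zip(w, target))
w, l = anneal([0.0, 0.0, 0.0], loss)
```

Since it only ever needs loss evaluations, not gradients, it doesn't care how the neurons are wired, which is why it works on an arbitrary blob topology. It's just slow, because every candidate means re-running the whole network.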

1) There is still no evidence that there is NO backprop in the brain.
2) Backprop is a good optimizer; it shows good results for training deep neural networks.
3) The features learned with backprop are similar to biological features (Google Scholar it).

Stupid memer.

>Is there still room left for improvements in rule-based systems or is everything just >muh deep NN now?

If you can make your expert systems learn, then sure, it will be an improvement. The lack of a working learning algorithm is the main disadvantage of rule-based systems.

All of machine learning is hill climbing

Evolution generally is brute forcing. If you have the time to spare, you are basically guaranteed a solution for your fitness function, so it's not like it doesn't have its place in AI.

Evolution is actually pretty fast compared to more accurate algorithms, like simplex

Hill climbing as in unnecessary abstractions

That depends on the complexity of the problem. When testing fitness in time-based scenarios, you are capped by the real time it takes to test the system, for instance if you're running it in a physics simulation. So testing the efficiency of flood-gate designs is going to take as long as it takes to simulate the fluid dynamics for each spawn, for each generation, over the lifetime of the evolution. There are other ways to solve such problems, of course, but evolution isn't always fast.

Bump

FUCK YOU I SAID BUMP!

Nice that this thread still lives.

So the "solution" in the end was just jumping and spinning all the time. Someone build them an AI-developer AI pls