Anyone here into AI or machine learning...

Anyone here into AI or machine learning? I found out that you can interface with MAME using Lua, and Lua can in turn talk to Python. Time to build the ultimate Tekken-playing AI using reinforcement learning! I want to see whether a single neural network can generalize between different Tekken games, so I'll be emulating Tekken 1, 2, 3, and Tag on MAME, either in succession or in random order.

I'm planning to use a Deep Q-Network (DQN) which, at every timestep, takes as input a tensor containing:
1. which game is being played (1-4)
2. your HP (0-100)
3. the opponent's HP (0-100)
4. time remaining (0-99 seconds)
5. a flattened array of the RGB values of each rendered pixel (0-255)

and outputs:
1. the direction of the analog stick (1-center, 2-left, 3-left/up, 4-up, 5-right/up, 6-right, 7-right/down, 8-down, 9-left/down)
2. up to two buttons pressed together (1-A, 2-B, 3-C, 4-D, 5-AB, 6-AC, 7-AD, 8-BC, 9-BD, 10-CD)

I'll program the neural network in Python with TensorFlow, although I might consider Keras if it makes life easier. I'll run it at 10 FPS, so each second has 10 timesteps, i.e. 10 decisions.

Right now I'm hunting for the memory cells that hold useful info such as score, HP, and time. Through the Lua interface you can read CPU register D0 with cpu.state["D0"].value or memory cell 0xC000 with mem:read_i8(0xC000), and it's around those addresses that I see activity (non-zero values), but there are too many memory locations to search through manually for anything meaningful. I'm also still a noob at both AI and programming, so if you'd recommend doing this some other way, tell me. I'll probably hit plenty of problems, but I can google things as I go.
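To make it concrete, here's a rough Keras sketch of the kind of network I'm describing. Purely illustrative: the layer sizes, the 84x84 grayscale downsample, and folding the 9 directions x 10 button combos into one 90-way Q-value output (a vanilla DQN wants a single discrete action space) are my own assumptions, not a finished design.

# Rough sketch only. Layer sizes, the 84x84 grayscale downsample and the
# combined 90-way action space (9 directions x 10 button combos) are assumptions.
from keras.layers import Input, Conv2D, Flatten, Dense, Concatenate
from keras.models import Model

NUM_ACTIONS = 9 * 10  # one Q-value per (direction, button-combo) pair

# Screen input: one downsampled grayscale frame, pixel values scaled to [0, 1].
frame_in = Input(shape=(84, 84, 1), name="frame")
x = Conv2D(32, 8, strides=4, activation="relu")(frame_in)
x = Conv2D(64, 4, strides=2, activation="relu")(x)
x = Flatten()(x)

# Scalar inputs read from memory: game id, my HP, opponent HP, time remaining.
stats_in = Input(shape=(4,), name="stats")

h = Concatenate()([x, stats_in])
h = Dense(256, activation="relu")(h)
q_values = Dense(NUM_ACTIONS, activation="linear", name="q")(h)

model = Model(inputs=[frame_in, stats_in], outputs=q_values)
model.compile(optimizer="adam", loss="mse")
model.summary()

Picking an action is then just an argmax over the 90 Q-values, decoded back into a (direction, buttons) pair for the Lua side to press.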

Attached: Tekken3gameplay.png (639x478, 391K)

For rewards, I'm thinking of setting +100 for a game won, -100 for a game lost, and -1 for every second elapsed.

interesting
by that logic losing immediately is better than losing after 90 seconds

how do you think I should set the rewards?

the same, but if you lose, score gets added for every second you survived

i have no fucking clue i dont know ai an shiet, just thought losing instantly is probably worse than after a long and intense fight

Yeah. You never know what the AI will learn.
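You're right that a flat -1 per second punishes surviving, though. Something along these lines is closer to what I'm leaning towards now. Just a sketch: the exact numbers and the idea of rewarding the per-step HP differential are assumptions I'll have to tune.

def reward(prev_my_hp, prev_opp_hp, my_hp, opp_hp, won, lost, seconds_survived=0):
    """Per-timestep reward sketch; every number here is a guess to be tuned.

    Rewarding the HP differential each step gives feedback long before a
    round ends, and losing after a long, close fight now scores better
    than losing instantly (the flat -1/second from before is dropped).
    """
    r = 0.0
    r += prev_opp_hp - opp_hp          # damage dealt this step
    r -= prev_my_hp - my_hp            # damage taken this step
    if won:
        r += 100.0
    if lost:
        r -= 100.0 - seconds_survived  # softer penalty the longer you lasted
    return r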

bump
Any help appreciated with the memory cell locations

Attached: 1520697139931.gif (768x432, 1.55M)

I've already searched for well-known memory cell locations for the Tekken games, but it doesn't seem like anyone has tried what I'm attempting, which makes it exciting but harder.

Is this what you're referring to? This is kinda relevant to your interests, I think.

youtube.com/watch?v=tUcf5-FTPUY

do you need a logo?

Uhh sure, I mean, if you're willing to do it for free and I can credit you. I'm horrible at drawing and such. I'm a broke student though; I don't make my own money yet, so I can't pay for anything.

I'll make the logo

contact details?

I am considering streaming the AI on Twitch as it learns to play Tekken. A logo would be nice.

emulate it with a known good emulator and find the addresses yourself, that's what i've done in the past. it's not hard if you know what makes a value change, or what number the value should be.
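if it helps, the kind of narrowing search i mean looks roughly like this in python. it assumes you've already saved raw dumps of the same RAM region at two moments (say, at full HP and right after taking a hit); the file names and the 0xC000 base address are placeholders.

# Narrow down candidate addresses by diffing two RAM dumps of the same region.
# The dump files are hypothetical; produce them however you like (MAME's
# debugger, a Lua script writing bytes out, etc.).
BASE_ADDR = 0xC000  # address the dumps start at (placeholder)

with open("dump_full_hp.bin", "rb") as f:
    before = f.read()
with open("dump_after_hit.bin", "rb") as f:
    after = f.read()

candidates = []
for offset, (a, b) in enumerate(zip(before, after)):
    # HP should go *down* after a hit, so keep only bytes that decreased.
    if b < a:
        candidates.append((BASE_ADDR + offset, a, b))

for addr, a, b in candidates:
    print(f"0x{addr:04X}: {a} -> {b}")

repeat with more dump pairs and intersect the candidate lists; a few passes usually leaves a handful of addresses you can then watch live with mem:read_i8().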

[email protected]

imagine being this innocent

I will credit you if I end up achieving this and get it working

Fairly poor implementation. The author needs to create a randomized moveset and learn the best options over time from frame data. It also needs to factor in the damage done by each move.

sounds like it'll take forever and a half to train.

Do you think it's a bad idea?
I'd have to leave my main PC with the juicy GPU on all the time, and I'd have to stop streaming whenever I want to use it or play games. I looked into whether I could use a dedicated Raspberry Pi 3 for the 24/7 playing and streaming, but it doesn't have nearly enough juice to run my neural network.

that's why I want to stream it on Twitch. People will be able to watch the AI slowly learn to play over a long period of time. Is there a problem?

what is this exactly? is it a neural network? did it learn using supervised or reinforcement learning? Tekken 7 is an HD game; that would take a huge neural network and a supercomputer to run. I suppose the image is downsampled.

Also, I know TensorFlow's GPU support is Nvidia/CUDA only. I have an AMD RX 480, which uses OpenCL. Does Keras support that? I've seen that PlaidML supports OpenCL and is compatible with Keras code, but it only seems to document Linux support. I'm running Winblows on my main PC and I'm not really in the mood to install another OS.

I've sent you an email.

i think it's a pretty cool idea. make sure you post updates!

github.com/hughperkins/tf-coriander

Thank you. I will. If you wanna lurk in my twitch channel until it's ready to be streamed:
themainframefox

>github.com/hughperkins/tf-coriander
I think TensorFlow is gonna be a lot of hassle. I think I'm going with Keras. I managed to install PlaidML with pip from within the Anaconda environment on Windows. Not sure if it's gonna work, but it hasn't complained, even though it only has support information for Linux.
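In case anyone wants to try the same setup, this is all the wiring it needed on my end. Assumptions: plaidml-keras installed fine via pip and plaidml-setup has already been run to pick the RX 480; the little smoke test is just how I plan to check that the backend actually works.

# Point Keras at the PlaidML backend *before* importing keras.
import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

import numpy as np
from keras.layers import Dense
from keras.models import Sequential

# Tiny smoke test: if this trains without complaining, the OpenCL backend works.
model = Sequential([Dense(16, activation="relu", input_shape=(8,)),
                    Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(256, 8), np.random.rand(256, 1), epochs=1, verbose=1)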

Why would he be using visual input to make decisions? You can use a debugger to get key information from the game's memory itself: life bars, opponent inputs, opponent positioning, etc.

I would be feeding the neural network a screen capture every timestep as a flattened array of RGB or grayscale values; I'll probably convert the image to grayscale to reduce the network's complexity. It will also help the NN to get additional info like the score, the HP, and the time, plus what game is being played and which characters are in the fight.
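Roughly this kind of preprocessing, as a sketch; the 84x84 target size and using Pillow for the resize are just my assumptions.

import numpy as np
from PIL import Image

def preprocess(frame_rgb, size=(84, 84)):
    """Turn one captured RGB frame (H x W x 3 uint8 array) into a flattened,
    normalized grayscale vector for the network. Target size is a guess."""
    img = Image.fromarray(frame_rgb).convert("L")   # RGB -> grayscale
    img = img.resize(size, Image.BILINEAR)          # downsample
    gray = np.asarray(img, dtype=np.float32) / 255.0
    return gray.flatten()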

How do you use a debugger with MAME? I guess I'd have to compile it from source, and I just have the binary.

i don't want to sound mean or discouraging, but it feels to me like you're gonna realize down the road that this is going to be a lot harder than you expect. if you end up not following through with it, don't get too discouraged. just keep a positive attitude and try to find a new project.

I'm expecting problems, bugs, and performance issues. I also expect to get stuck a lot while debugging, since Lua and Python are new to me. That's one reason I'm looking for people who'd like to get involved and who have more practical programming experience. But yeah, nobody has done a multi-Tekken AI as far as I can find, so I don't have anyone else's footsteps to follow.

If you have suggestions, feel free to drop them btw

Gonna go ahead and say that you won't succeed. Even Google isn't able to teach AI to play 3d games like this. Pick some easy 2d platformer like mario instead and save yourself a whole lot of trouble.

It's worth trying, I think. I'm really interested in Tekken. Anyway, Tekken games are 3D, but the early ones don't use the third dimension much; they play more like 2D fighting games. As Tekken evolves (Tekken 3 and Tag), it makes more use of the 3D space in the controls, with sidestepping and whatnot. I want to see if the same neural net can learn to play the different Tekken games and adapt. I'll start by giving it the same character (Kazuya) every time until it learns to play at a good level, and then start mixing characters more. I'll convert the image to grayscale and downsample it to reduce the complexity of the neural net so I don't run into computation problems. I'm going to build the neural net with Keras, which does most of the work for you. I just need to find the memory cells for the other input info I need (HP, scores, time remaining, characters in the fight, game being played). I expect it to be challenging but not super hard, though I don't have a lot of experience.
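For the training loop itself I'm expecting to follow the standard DQN recipe: epsilon-greedy action selection plus an experience replay buffer, roughly like the bare-bones sketch below. The hyperparameters are guesses, there's no target network to keep it short, and the hooks that grab the state and press the buttons via Lua still have to be written.

import random
from collections import deque
import numpy as np

NUM_ACTIONS = 90              # 9 directions x 10 button combos
GAMMA = 0.99                  # discount factor (guess)
replay = deque(maxlen=50000)  # experience replay buffer of (s, a, r, s2, done)

def choose_action(model, state, epsilon):
    """Epsilon-greedy over the discrete actions (decay epsilon in the main loop)."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    return int(np.argmax(model.predict(state[np.newaxis], verbose=0)))

def train_step(model, batch_size=32):
    """One DQN update from a random minibatch. For brevity this assumes a
    single-input model; with the frame+stats model you'd pass [frames, stats]."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states = np.array([s for s, a, r, s2, done in batch])
    next_states = np.array([s2 for s, a, r, s2, done in batch])
    q = model.predict(states, verbose=0)
    q_next = model.predict(next_states, verbose=0)
    for i, (s, a, r, s2, done) in enumerate(batch):
        q[i, a] = r if done else r + GAMMA * np.max(q_next[i])
    model.fit(states, q, verbose=0)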

Also, Mario is overdone in AI research. I wanna do something original.

I'm just saying that if this is your first project, then you should go with a game that someone has been able to solve with neural networks in the past. If you're able to replicate the results yourself, then you could move onto more complex games with confidence.

I have a little bit of experience in reinforcement learning with GVGAI: gvgai.net/
I am not the most experienced AI programmer but I think I have a good foundation to build a project on and I want to do something exciting. I can Google problems as much as I need to for it to work and I am looking to collaborate with other people who might be more experienced.

I also have patience and I'll keep trying to make something work for a long time and ask about a problem everywhere before I give up

Well, good luck then. I hope you succeed.

Thanks user :)
I hope so too. If not, I can always pick the project back up again when I am more experienced but I have about a month of holidays from uni and wanted to use them to make something interesting.