
Matthew - Capstone Blog Post 6


First, let's briefly cover what happened over the break. I spent most of my time working at my job, but managed to read most of Michael Nielsen's textbook, Neural Networks and Deep Learning. I also read much of the documentation for both Keras and SerpentAI and studied some of the latter's source code. Overall, I feel as though I have a much better understanding of A) neural networks, and B) our frameworks. Additionally, I have started participating on the Discord servers for SerpentAI and Rivals of Aether. Both communities have proven to be of great help in their respective domains of expertise.

Next, I must report some unfortunate news. Some 222 replay files out of our set are unusable because they are from early versions of the game that do not support replay playback. Still, this leaves us with 798 perfectly fine replays; I believe we have more than enough.

Since the beginning of the semester, I have taken steps towards organizing and structuring our project. I wrote an ROA version sorter which looks inside your replays folder (usually found in AppData/Local/RivalsofAether/replays), reads each replay, creates a folder for each game version that has at least one replay, and then moves all of the replays into the correct folders. The resulting file structure looks something like this:
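(The version numbers and file names below are just placeholders; yours will depend on which patches your replays come from.)

replays/
    00_15_07/
        some_match.roa
        ...
    01_00_02/
        another_match.roa
        ...
    01_02_01/
        ...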

Incidentally, all of the folders that start with "00" represent versions of the game whose replays we are unable to play back.
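For the curious, the heart of the sorter looks something like the sketch below. Fair warning: the header parsing here is a stand-in. I'm assuming the game version shows up as digits near the start of each .roa file, which is close to, but not exactly, what the real script does.

import os
import re
import shutil

REPLAYS = os.path.expandvars(r"%LOCALAPPDATA%\RivalsofAether\replays")

def read_version(path):
    # Assumption: the version appears as something like "1.2.3" in the
    # first line of the replay header. The real parsing is fussier.
    with open(path, "r", errors="ignore") as f:
        header = f.readline()
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", header)
    if not match:
        return None
    return "_".join(f"{int(g):02d}" for g in match.groups())

def sort_replays(replays=REPLAYS):
    for name in os.listdir(replays):
        if not name.endswith(".roa"):
            continue  # skip the version folders themselves
        src = os.path.join(replays, name)
        folder = os.path.join(replays, read_version(src) or "unknown")
        os.makedirs(folder, exist_ok=True)
        shutil.move(src, os.path.join(folder, name))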

The project now uses a configuration file called roa.ini to store the path to the replays folder, since this path varies with your username (in my case it's a subdirectory of C:\Users\matth). This allows our scripts and plugins to find the replays folder without any annoying handholding like launch arguments or hardcoded constants.
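For reference, roa.ini is just a plain INI file read with Python's built-in configparser. The section and key names below are illustrative:

[Paths]
replays = C:\Users\matth\AppData\Local\RivalsofAether\replays

And reading it from a script or plugin is a couple of lines:

import configparser

config = configparser.ConfigParser()
config.read("roa.ini")
replays_folder = config["Paths"]["replays"]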

Well, I should say that it mostly does so without that sort of annoying handholding. There is one symbolic directory link that is absolutely essential:
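On Windows, creating it looks something like this (run from an elevated command prompt; both paths below are placeholders for wherever SerpentAI and our repository actually live):

mklink /D "C:\path\to\SerpentAI\plugins" "C:\path\to\repository\plugins"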

The symbolic link lives inside the SerpentAI installation directory and points to the plugins folder in our repository. This allows us to run the plugins with SerpentAI without having to copy them from the repository to the SerpentAI installation location. You could try manually copying the plugins if you don't want to set up a symlink, but I would recommend against it: the plugins resolve the configuration file by stepping back through the symlink, and they will crash if no symlink exists at the expected location.

In other news, I have started writing a helper module called replay_management. It contains a replay manager class that keeps track of all of the replay files. The goal is for the manager to organize replays into batches and to serve individual replays on request. Right now, the manager can read all of the replay folders (seen above) and all of the replay files located therein. The module also contains some useful enums for stages and characters: each stage and character is associated, via enumeration, with the number that represents it in a replay file. For example, Ranno is character 11 and Blazing Hideout is stage 07.
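The enums are plain IntEnums keyed to the replay format's IDs. Truncated here to the two examples above (the real module covers the full roster and stage list):

from enum import IntEnum

class Character(IntEnum):
    # ID used for this character in the replay files.
    RANNO = 11

class Stage(IntEnum):
    # ID used for this stage in the replay files.
    BLAZING_HIDEOUT = 7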

The problem I am currently working on is how to split the replay files into separate sets for training, testing, and validating the neural network; the manager must be able to do this through random sampling according to a specified distribution (e.g. 0.6, 0.2, 0.2). Furthermore, I need to come up with a better class hierarchy for managing batches, because the current one is kind of a mess.
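The sampling itself shouldn't be too bad. Roughly, I'm picturing something like this (the method name and signature are still up in the air):

import random

def split_replays(replays, weights=(0.6, 0.2, 0.2), seed=None):
    # Shuffle a copy, then partition it into training/testing/validation
    # sets sized according to the given distribution.
    rng = random.Random(seed)
    pool = list(replays)
    rng.shuffle(pool)
    n_train = round(weights[0] * len(pool))
    n_test = round(weights[1] * len(pool))
    return (pool[:n_train],
            pool[n_train:n_train + n_test],
            pool[n_train + n_test:])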

Lastly, I would like to draw attention to one problem that has been giving me conniptions. Do we have the neural network train by watching replays in real time? That could be slow and inconsistent. Or should we use a library like mss, or perhaps a secondary game agent with SerpentAI (which uses mss internally), to capture the specific frames we want from each replay file ahead of time? If we go the latter route, training will be much faster. This wasn't our original plan, however, so we will need to discuss it internally as a team.
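For context, grabbing a single frame with mss only takes a few lines; something like this (the capture region is a placeholder for wherever the game window sits):

import mss
import numpy as np

with mss.mss() as sct:
    # Region of the screen where the game window would be.
    region = {"top": 0, "left": 0, "width": 960, "height": 540}
    frame = np.array(sct.grab(region))  # BGRA pixel data as a NumPy array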

I would say that our greatest obstacle right now is matching frames from the game with input data from the replay files. I think we will be out of the woods once we can solve this.
 

https://pa1.narvii.com/5692/db17de5e5db99188681fa3a73f07c833cfb9e29d_hq.gif
