
Matthew - Capstone Blog Post 6


First, let's briefly cover what happened over the break. I spent most of my time working at my job, but managed to read most of Michael Nielsen's textbook, Neural Networks and Deep Learning. I also read much of the documentation for both Keras and SerpentAI and studied some of the latter's source code. Overall, I feel as though I have a much better understanding of A) neural networks, and B) our frameworks. Additionally, I have started participating on the Discord servers for SerpentAI and Rivals of Aether. Both communities have proven to be of great help in their respective domains of expertise.

Next, I must report some unfortunate news. Some 222 replay files out of our set are unusable because they are from early versions of the game that do not support replay playback. Still, this leaves us with 798 perfectly fine replays; I believe we have more than enough.

Since the beginning of the semester, I have taken steps towards organizing and structuring our project. I wrote an ROA version sorter that looks inside your replays folder (usually found in AppData/Local/RivalsofAether/replays), reads each replay, creates a folder for each game version that has at least one replay, and then moves all of the replays into the correct folders. The resulting file structure has one folder per game version, roughly as the sketch below suggests.
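Here is a rough sketch of the idea, with illustrative version numbers and a simplified stand-in for the version-reading logic (the real parsing is more involved):

    import os
    import shutil

    REPLAYS_PATH = os.path.expandvars(r'%LOCALAPPDATA%\RivalsofAether\replays')

    def read_version(replay_path):
        # Placeholder: pretend the game version sits at the start of the
        # replay's first line and just needs to be made folder-friendly.
        # The real project reads it properly from the replay header.
        with open(replay_path, errors='ignore') as replay_file:
            header = replay_file.readline().strip()
        return header[:8].replace('.', '_') or 'unknown'

    def sort_replays_by_version(replays_path=REPLAYS_PATH):
        for name in os.listdir(replays_path):
            if not name.endswith('.roa'):
                continue
            replay_path = os.path.join(replays_path, name)
            version_dir = os.path.join(replays_path, read_version(replay_path))
            os.makedirs(version_dir, exist_ok=True)  # one folder per version
            shutil.move(replay_path, os.path.join(version_dir, name))

    # After sorting, the replays folder looks roughly like this
    # (version numbers are made up for illustration):
    #
    #   replays/
    #       00_15_07/
    #       01_00_02/
    #       01_02_02/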

Incidentally, all of the folders that start with "00" represent versions of the game whose replays we are unable to use.

The project now uses a configuration file called roa.ini to store the path to the replays folder, because this path can vary depending on your username (e.g. in my case it's a subdirectory of C:\Users\matth). This allows our scripts and plugins to access the replays folder without any annoying handholding like launch arguments or hardcoded constants.
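Nothing fancy is going on there; a minimal sketch with configparser, using an illustrative section and key name rather than necessarily the exact ones in our repository, looks like this:

    import configparser

    # roa.ini might contain something along these lines:
    #
    #   [RivalsofAether]
    #   replays_path = C:\Users\matth\AppData\Local\RivalsofAether\replays

    config = configparser.ConfigParser()
    config.read('roa.ini')
    replays_path = config['RivalsofAether']['replays_path']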

Well, I should say that it does so mostly without that sort of annoying handholding. There is one symbolic directory link that is absolutely essential.



The symbolic link lives inside the SerpentAI installation directory and points to the plugins folder in our repository. This allows us to run the plugins with SerpentAI without having to copy them from the repository into the SerpentAI installation. You could copy the plugins over manually if you don't want to set up a symlink, but I would recommend against it: the plugins step back through the symlink to find the configuration file, and they will crash if no symlink exists at the expected location.
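If you do want to recreate the link, a directory symlink can be made with os.symlink (or with mklink /D from an elevated command prompt). The paths below are placeholders, not our actual install locations:

    import os

    # Placeholder paths; substitute your own SerpentAI install and repository clone.
    link_path = r'C:\Users\matth\SerpentAI\plugins'            # link to create
    repo_plugins = r'C:\Users\matth\ContentsMayBeHot\plugins'  # link target

    # Creating a directory symlink needs an elevated prompt (or Developer Mode)
    # on Windows. The equivalent shell command is: mklink /D <link> <target>
    os.symlink(repo_plugins, link_path, target_is_directory=True)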

In other news, I have started writing a helper module called replay_management. It contains a replay manager class that keeps track of all of the replay files. The goal of this replay manager is to organize different batches and to hand out individual replays on request. Right now the manager is set up to read all of the replay folders (seen above) and all of the replay files located therein. The module also contains some useful enums for stages and characters. Each stage and character is now associated (via enumeration) with the number that represents it in a replay file. For example, Ranno is character 11 and Blazing Hideout is stage 07.
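To give a flavor of the enums, here is a minimal sketch; only the two values mentioned above come from the real mapping, and the rest of the actual enums follow the same pattern:

    from enum import IntEnum

    class Character(IntEnum):
        # Values mirror the numeric codes stored in the replay files.
        RANNO = 11

    class Stage(IntEnum):
        BLAZING_HIDEOUT = 7

    # Turning a raw code from a replay back into something readable:
    print(Character(11).name)   # RANNO
    print(Stage(7).name)        # BLAZING_HIDEOUT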

The problem I am currently working on is how to split the replay files into different sets for training, testing, and validating the neural network; the manager must be able to do this through random sampling according to some specified distribution (e.g. 0.6, 0.2, 0.2). Furthermore, I have to come up with a better class hierarchy for managing batches because the current one is kind of a mess.
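The sampling itself should be simple enough. Something along these lines is what I have in mind, though this is a sketch rather than the final ReplayManager interface:

    import random

    def split_replays(replay_paths, weights=(0.6, 0.2, 0.2), seed=None):
        # Randomly partition replays into training, testing, and validation
        # sets according to the given proportions.
        assert abs(sum(weights) - 1.0) < 1e-9
        shuffled = list(replay_paths)
        random.Random(seed).shuffle(shuffled)
        n_train = int(weights[0] * len(shuffled))
        n_test = int(weights[1] * len(shuffled))
        training = shuffled[:n_train]
        testing = shuffled[n_train:n_train + n_test]
        validation = shuffled[n_train + n_test:]
        return training, testing, validation

    # Usage (replay_paths would come from the replay manager):
    # training, testing, validation = split_replays(replay_paths)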

Lastly, I would like to draw attention to one problem that has been giving me conniptions. Should we have the neural network train by watching replays in real time? This could be slow and inconsistent. Alternatively, should we use a library like mss, or perhaps a secondary game agent with SerpentAI (which uses mss), to capture the specific frames we want from each replay file ahead of time? If we go the latter route, training will be much faster. This wasn't our original plan, however, so we will need to discuss it internally as a team.
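For reference, grabbing a frame with mss and handing it to NumPy only takes a few lines; picking the monitor (or game window region) and any down-scaling would be up to us:

    import numpy as np
    from mss import mss

    with mss() as screen_capturer:
        monitor = screen_capturer.monitors[1]        # primary monitor
        screenshot = screen_capturer.grab(monitor)   # raw BGRA pixel data
        frame = np.array(screenshot)                 # shape: (height, width, 4)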

I would say that our greatest obstacle right now is matching frames from the game with input data from the replay files. I think we will be out of the woods once we can solve this.
 

https://pa1.narvii.com/5692/db17de5e5db99188681fa3a73f07c833cfb9e29d_hq.gif
