
Posts

Rei - Blog Post 7

Since the beginning of the semester, Matthew and I have been working hard on the Replay Parser and Frame Collector. Both of these parts are crucial to the success of our project. With the first demo of our project finished, this first milestone is now complete. Parsing the replays exposed some interesting information about how inputs are recorded, and therefore how the game sees them. Our original understanding was that multiple inputs would be spread out across multiple frames, but this turned out not to be true. Instead, each frame is followed by a list of events that take place on that frame. We also learned that some human actions translate into multiple inputs. For example: if you are using the control stick to move your character and you hold right, your character moves right. One might think this means a single RIGHT_PRESS action is recorded, but in actuality a list of actions is generated. The list may look something l...
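To illustrate the frame-followed-by-events structure described above, here is a minimal sketch of how a parser might group a frame number with the list of events recorded on it. This is not the actual Replay Parser code: the class, its field names, and the second action token (everything other than RIGHT_PRESS, which is mentioned above) are placeholders.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FrameEvents:
    """One frame of a replay: the frame number plus every event recorded on it."""
    frame: int
    events: List[str] = field(default_factory=list)

# Hypothetical example: holding right on the control stick does not produce a
# single RIGHT_PRESS; the replay records a list of actions for that frame.
example = FrameEvents(
    frame=412,                                 # placeholder frame number
    events=["RIGHT_PRESS", "ANGLE_RIGHT"],     # placeholder action names
)

for event in example.events:
    print(f"frame {example.frame}: {event}")
```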

Matthew - Blog Post 7

Since January, we've been working hard to not only finish writing the Replay Parser and Frame Collector but also totally synchronize them. I'm pleased to report our success. This is an amazing milestone for us because it means that we've surmounted one of our most troubling obstacles. I have also made sure to keep our documentation up to date. So, if you like, you can follow along with this blog post by replicating its results. The Frame Collector uses timed input sequences to start each replay associated with the currently running game version. Then, after waiting a set amount of time for playback to begin, it starts grabbing 1/4-scale frames at a rate of 10 frames per second. The Frame Collector takes these down-scaled frames, which are NumPy arrays, and rapidly pickles and dumps them into the file system. Here's a screenshot of the Frame Collector in action: If you look at the image above, you'll see that each pickle (the .np files) ...
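The description above (quarter-scale frames grabbed at 10 frames per second, pickled NumPy arrays dumped to the file system) maps onto a fairly simple loop. Here is a minimal sketch, not the actual Frame Collector: `grab_frame`, the output directory, and the naive strided downsample are all placeholders for whatever screen capture and resizing the real collector uses.

```python
import pickle
import time
from pathlib import Path

import numpy as np

OUTPUT_DIR = Path("frames")   # hypothetical output location
FPS = 10                      # 10 frames per second, as described above
FRAME_INTERVAL = 1.0 / FPS

def grab_frame() -> np.ndarray:
    """Placeholder for the real screen-capture call; returns an RGB frame."""
    return np.zeros((1080, 1920, 3), dtype=np.uint8)

def collect(duration_seconds: float) -> None:
    OUTPUT_DIR.mkdir(exist_ok=True)
    deadline = time.time() + duration_seconds
    index = 0
    while time.time() < deadline:
        frame = grab_frame()
        quarter = frame[::4, ::4]          # crude quarter-scale downsample
        with open(OUTPUT_DIR / f"frame_{index:06d}.np", "wb") as f:
            pickle.dump(quarter, f)        # pickle the NumPy array to disk
        index += 1
        time.sleep(FRAME_INTERVAL)         # pace the loop at roughly 10 fps

if __name__ == "__main__":
    collect(duration_seconds=2.0)
```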

Rei - Blog Post 6

We are back after a long break. While Matthew was able to work on the project over break, much of my time was taken by my job. I then developed the flu, so I was able to work much less than my partner. Anyway, now that we are done with excuses, let's talk about what I have been working on. Now that I have recovered a bit more and am capable of working again, I have begun work on a replay parser. We have spoken about replays in the past; however, I would like to take the time here to really go in depth about how replays store information. Rivals of Aether makes our job of parsing replays much easier by storing them in a plain-text format. With some help from the RoA community, we have been able to break down what a lot of the symbols mean. I am going to go into a little more depth about what data we can extract from the file and why we care about it. Line One The first line of the file contains basic metadata about the replay itself. The line is formatted as fol...
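Since the post is cut off before the format breakdown, here is only a very generic sketch of reading a plain-text replay: open the file, treat the first line as the metadata described above, and keep the remaining lines for later parsing. The file name, the dictionary keys, and the assumption that the later lines hold the per-frame input data are ours, not the documented format.

```python
from pathlib import Path

def load_replay(path: str) -> dict:
    """Read a plain-text replay and split off the first (metadata) line."""
    lines = Path(path).read_text().splitlines()
    if not lines:
        raise ValueError(f"empty replay file: {path}")
    return {
        "metadata": lines[0],   # first line: basic metadata about the replay
        "body": lines[1:],      # remaining lines, parsed elsewhere (assumption)
    }

# Hypothetical usage; the path is a placeholder.
replay = load_replay("example.roa")
print(replay["metadata"])
```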

Matthew - Capstone Blog Post 6

First, let's briefly cover what happened over the break. I spent most of my time working at my job, but managed to read most of Michael Nielsen's textbook, Neural Networks and Deep Learning. I also read much of the documentation for both Keras and SerpentAI and studied some of the latter's source code. Overall, I feel as though I have a much better understanding of A) neural networks, and B) our frameworks. Additionally, I have started participating on the Discord servers for SerpentAI and Rivals of Aether. Both communities have proven to be of great help in their respective domains of expertise. Next, I must report some unfortunate news. Some 222 replay files out of our set are unusable because they are from early versions of the game that do not support replay playback. Still, this leaves us with 798 perfectly fine replays; I believe we have more than enough. Since the beginning of the semester, I have taken steps towards organizing and structuring our project. I...

Rei - Blog Post 4 & 5

Due to external factors, I accidentally missed the last blog post; however, I shall make up for it by writing two posts in one! The last couple of weeks have been at least a little productive. Blog Post 4 Over these weeks, Matthew and I were both working on solidifying and exploring the context portion of our project. Of course, we have been exploring the context since nearly the beginning of the year, but both capstone classes have now asked us to write something about the context of our paper. The context of our project can be found throughout our previous blog post, but a short, summarized version of our context section boils down to a couple of main points: neural networks are not very present in video games, and when they are present, they are used for research or marketing. As for the rest of our Design Document, unfortunately we rushed it a bit. We got caught up in some upsetting events these weeks and allowed our work to suffer instead of staying on t...

Matthew - Capstone Blog Post 5

Soliciting data from the community It's official. We shilly-shallied about it for months, but now we have finally settled on Rivals of Aether as our training platform. On November 25th, I made a thread on r/RivalsOfAether titled Looking for replay files to use in machine learning research. I honestly was not sure what kind of response to expect. I had only learned about RoA's existence, I would estimate, sometime around mid-October. Rei pitched it to me several times as a viable alternative to Doom and Quake for machine learning, citing its ability to record input data from matches in plaintext. They even bought me a copy towards the end of October, which featured in my blog post about setting up SerpentAI. The plaintext replay files are certainly an attractive prospect when compared to the binary demo files found in id shooters. Furthermore, the game itself is stylish and fun. I mean, just look at Orcane! Nonetheless, I was wary of the idea of using it as...

Matthew - Capstone Blog Post 4

Finally, our CSI-480 (Advanced Topics: AI) course material is catching up to where we need to be. We are covering perceptrons and sigmoid neurons in the lectures, and we are also using TensorFlow to solve some very simple introductory problems (via tutorials). To supplement this, I have been reading Neural Networks and Deep Learning by Michael Nielsen, a textbook available for free on the internet, which dives into neural networks right from the first chapter. Additionally, I have been finding 3Blue1Brown's multi-part video series about deep learning to be extremely helpful for visualizing some of the more advanced concepts. Even if I do not fully understand the calculus and linear algebra involved, at the very least I have a better idea of what goes on inside of neural networks. For example: I know what loss and gradient descent algorithms do, essentially, and I also understand how the latter helps find a local minimum for the former, but I do not necessarily feel confid...
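As a concrete illustration of the loss/gradient-descent relationship mentioned above, here is a minimal sketch (not from our project, and not Nielsen's code): gradient descent repeatedly steps a parameter in the direction opposite the loss gradient, which drives it toward a local minimum of the loss. The quadratic loss, starting point, and learning rate are arbitrary choices for the example.

```python
def loss(w: float) -> float:
    """A simple quadratic loss with its minimum at w = 3."""
    return (w - 3.0) ** 2

def grad(w: float) -> float:
    """Derivative of the loss with respect to w."""
    return 2.0 * (w - 3.0)

w = 0.0              # arbitrary starting point
learning_rate = 0.1  # arbitrary step size

for step in range(50):
    w -= learning_rate * grad(w)   # step opposite the gradient

print(w, loss(w))  # w approaches 3, the local (here, global) minimum of the loss
```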