
Matthew - Capstone Blog Post 5

Soliciting data from the community

It's official. We shilly-shallied about it for months, but now we have finally settled on Rivals of Aether as our training platform. On November 25th, I made a thread on r/RivalsOfAether titled Looking for replay files to use in machine learning research. I honestly was not sure what kind of response to expect. I had only learned about RoA's existence, I would estimate, sometime around mid-October. Rei pitched it to me several times as a viable alternative to Doom and Quake for machine learning, citing its ability to record input data from matches in plaintext. They even bought me a copy towards the end of October, which featured in my blog post about setting up SerpentAI.

The plaintext replay files are certainly an attractive prospect when compared to the binary demo files found in id shooters. Furthermore, the game itself is stylish and fun. I mean, just look at Orcane!

Nonetheless, I was wary of using it as our training platform due to a lack of readily available data. Quake has online repositories like Speed Demo Archives and Quake Terminus. Surely, then, Quake was the obvious choice? Besides, wasn't the original pitch to use an FPS, and the original dream to make a UT4 bot?

Well, less than 24 hours after I posted on the RoA subreddit, the community had given us 893 replay files featuring matches from a variety of settings, including tournaments, exhibitions, and ranked matches. This dataset makes a world of difference, not just to the practicality of our project but also to my outlook. In retrospect, I have come to realize that I simply like Rivals more than Quake/Doom; I just needed to know that it could be a viable platform before I could let myself get attached to it.

As of today (11/27), we have 1,032 replays. I honestly couldn't be happier. I just want to do a good job so that we can make the Rivals community proud.

Regarding feedback on the design document

We received feedback on our design document draft. Apart from fixing our citations and, you know, actually compiling our LaTeX into a PDF, we mainly need to improve our literature and methodology write-ups. I already have a two-birds, one-stone plan for this.

We will do another pass over all of our academic sources, filling out templates I designed to guide us for both of the aforementioned sections. The templates are meant to serve as an organized repository of information—kind of like a document database, except not elegant or efficient, but rather all crammed inside of a single WYSIWYG doc. I guess I will share that template here, but please be kind when you tear it to shreds:

Article title
Researcher names
Goal: what they are using machine learning to do.
Environment: what game or environment they are placing their AI into, and what concrete tasks they give it.
Tools: list of libraries, software, and other tools used by the researchers.
Data structures & algorithms: list of the data structures and algorithms they use, with a brief description of each.
Data flow: quick explanation of how the aforementioned data structures and algorithms work together to produce the desired result, i.e. a textual data-flow summary.
Results: whether or not the tests were successful.
Related work: any research or experiments that are frequently mentioned. Note each as follows: Title. Names. Context.
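To make the "document database" comparison concrete, each filled-out template could be thought of as one structured record. Here is a sketch of that record as a Python dataclass; the field names are my own shorthand for the template items above, illustrative rather than a fixed schema:

```python
from dataclasses import dataclass, field

@dataclass
class SourceNotes:
    """One filled-out template entry per academic source.

    Field names paraphrase the template items; they are
    illustrative, not a committed schema.
    """
    title: str
    researchers: list[str]
    ml_goal: str                     # what they use machine learning to do
    environment: str                 # game/environment and concrete tasks
    tools: list[str]                 # libraries, software, other tools
    structures_and_algorithms: dict[str, str]  # name -> brief description
    data_flow: str                   # how the pieces work together
    successful: bool                 # whether the tests were successful
    related_work: list[str] = field(default_factory=list)  # "Title. Names. Context."
```

The point of the structure is uniformity: once every source is captured in the same shape, writing the literature and methodology sections becomes a matter of querying our notes instead of rereading papers.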

We should have approached our research in this neat and organized manner from the start. Oh well. We will just have to find time to reread everything. We will also have to find time to revisit the problems we have already researched, this time in a 2D context. On top of that, we need to properly assess Keras and fix our introduction section. Of course, it's easy to just list the things we have to do...

