
Matthew - Blog Post 9

After our last meeting, Professor Auerbach asked us to shift our focus towards building and training our model. So that's what we've been working on lately. The results so far have been interesting and problematic.

The first step was to define a minimal working model and a loading system to feed it our labelled dataset. I wrote a Sequence subclass, which is essentially a kind of generator designed for use with Keras's fit_generator method. With fit_generator and a Sequence, we're able to train and test the model with just a couple of one-liners:
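The original snippet didn't survive, so here is a rough reconstruction. The class and variable names (ReplaySequence, train_seq, test_seq) are my placeholders, and the class is duck-typed here rather than subclassing keras.utils.Sequence so the sketch stands alone; the real code inherits from Sequence.

```python
import numpy as np

class ReplaySequence:
    """Duck-typed stand-in for a keras.utils.Sequence subclass."""

    def __init__(self, frames, labels, batch_size=32):
        self.frames = frames          # (n, 135, 240, 1) array
        self.labels = labels          # (n, 9) one-hot array
        self.batch_size = batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.frames) / self.batch_size))

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        return self.frames[lo:hi], self.labels[lo:hi]

# With a real keras.utils.Sequence subclass, training and testing
# each reduce to a one-liner:
#   model.fit_generator(train_seq, epochs=10)
#   loss, acc = model.evaluate_generator(test_seq)
```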


The Sequence subclass also has a few other tricks up its proverbial sleeve. For one, it reduces the dimensionality of the frame-buffer data from 135×240×3 to 135×240×1 by converting it to grayscale, which cuts the number of features from 97,200 down to 32,400. For another, it does the same with the labels, combining and dropping the 26 action types to leave just 9 atomic classes. This all happens very quickly in memory; the stored files are left untouched in case we ever decide we need the 26 classes or 3 color channels back.

As for our minimal working model: it was just a simple feed-forward neural network with an input layer, one hidden layer, and an output layer. It used RMSProp as its optimizer, categorical cross-entropy as its loss function, and accuracy as its metric. The output layer uses softmax as its activation function, producing 9 decimal values between 0 and 1 that sum to 1. I trained the model overnight on our currently processed dataset of 301 samples, where a typical sample contained between 1,000 and 2,000 labelled frames.
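The two in-memory reductions can be sketched like this. The luminance weights and the 26-to-9 action mapping below are placeholders for illustration, not our real values:

```python
import numpy as np

# Placeholder luminance weights (ITU-R BT.601) and a placeholder
# action-to-class mapping table; the real table is game-specific.
GRAY_WEIGHTS = np.array([0.299, 0.587, 0.114], dtype=np.float32)
ACTION_TO_CLASS = np.arange(26) % 9

def reduce_frames(frames):
    """(n, 135, 240, 3) RGB -> (n, 135, 240, 1) grayscale."""
    return (frames @ GRAY_WEIGHTS)[..., np.newaxis]

def reduce_labels(actions):
    """Raw action indices (0-25) -> one-hot over 9 atomic classes."""
    classes = ACTION_TO_CLASS[np.asarray(actions)]
    onehot = np.zeros((len(classes), 9), dtype=np.float32)
    onehot[np.arange(len(classes)), classes] = 1.0
    return onehot
```

Because the reductions run on the fly in `__getitem__`, the on-disk replays keep their full resolution and label set.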

After training the basic model, I wrote a quick game agent which loads the model and then feeds it game frames in exchange for predictions. Each prediction comes as a list of 9 values from the softmax layer. The agent is pretty simple: it tests each prediction class against an arbitrarily defined threshold value. If the prediction value exceeds the threshold, then the button for the corresponding action is pressed; otherwise, that button is released. Some classes, such as left and right or up and down, are mutually exclusive; if both left and right pass the threshold test, the agent chooses the higher of the two.
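The decision rule boils down to a few lines. The action names, their ordering, and the threshold value here are placeholders; only the logic (threshold test plus mutual-exclusion tiebreak) comes from the agent described above:

```python
# Hypothetical class ordering and threshold for illustration.
ACTIONS = ["left", "right", "up", "down",
           "jump", "attack", "special", "shield", "grab"]
EXCLUSIVE_PAIRS = [(0, 1), (2, 3)]  # left/right, up/down
THRESHOLD = 0.2  # arbitrarily defined

def predictions_to_buttons(pred):
    """pred: the 9 softmax outputs -> set of button names to hold."""
    pressed = {i for i, p in enumerate(pred) if p > THRESHOLD}
    for a, b in EXCLUSIVE_PAIRS:
        if a in pressed and b in pressed:
            # both passed the threshold: keep only the higher of the two
            pressed.discard(a if pred[a] < pred[b] else b)
    return {ACTIONS[i] for i in pressed}
```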

So, what did the feed-forward neural network make the game agent do? Well, it mostly held the down and left buttons and occasionally tried to jump. The result was a pacifist bot that appears to cower in the corner before jumping off the stage to kill itself. I believe that this is a good first step.

After confirming that the feed-forward model successfully trains, saves, loads, and predicts, Rei and I set to work on developing bigger and better models. So far we have created and trained an LSTM and a 2D convolutional network, but neither has had great results in the game yet. I was unable to make the LSTM load, and the 2D convolutional network somehow managed to play even more poorly than the feed-forward network. Furthermore, we have a serious resource problem. I can only train our advanced models on the CPU with system RAM, because my 6 GB of VRAM is not nearly enough to run them on the GPU. The convolutional network in particular used nearly all 32 GB of RAM on my computer as well as 100% of the processing power of my i7-6700, and it took around 8 hours to train.

The last major issue is that of creating our perfect dream model: combining the 2D convolutional network architecture with an LSTM. This means that our input shape needs an additional dimension, time. Currently, our input shape is batch size × image u × image v × color channels, or n×135×240×1. The time dimension, which represents frames, must go between batch size and image u. However, to my knowledge only the batch size can be variable, and our replays have all kinds of durations! Keras provides a TimeDistributed wrapper that we can use to encapsulate our Conv2D layers. We will also be looking into the ConvLSTM2D layer provided by Keras; it is a specialized layer that performs exactly the task we are interested in. Still, I am not sure how we can define our input shape. We may have to overhaul our Sequence subclass so that it produces a fixed number of frames per sample. Only additional research and experimentation will lead to an answer, though.
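One possible shape of that overhaul, sketched under assumptions: slice each variable-length replay into fixed-length windows of t frames (the window length and the drop-the-remainder policy are guesses, not a settled design).

```python
import numpy as np

def to_windows(replay, t=32):
    """(frames, 135, 240, 1) -> (num_windows, t, 135, 240, 1).

    Frames beyond the last full window are dropped.
    """
    n = len(replay) // t
    return replay[: n * t].reshape(n, t, *replay.shape[1:])

# The combined model could then declare input_shape=(t, 135, 240, 1),
# e.g. with TimeDistributed(Conv2D(...)) feeding an LSTM, or with a
# ConvLSTM2D layer, while the batch size stays variable as usual.
```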

