Rei - Capstone Blog Post 1


Over the past couple of weeks, Matthew and I have been trying to narrow down our idea for capstone. We have settled on a "Modular" AI that can play first-person shooters and other similar video games, but we decided to put a slight twist on the usual idea of an AI playing games. Most existing game-playing AIs have access to more information than they should possibly have at any given moment, such as the locations of other players. We decided that our AI would only have information that would be accessible to a human player. We also noticed that many of the "player AIs" out there are reactionary rather than planning. While reacting is a key part of many of these games, so is strategy. We want to create an AI that thinks, at least a little bit, about the actions it is making or should make.
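To make that constraint concrete: the agent's entire view of the game would be the raw frame buffer (and eventually the audio stream), nothing more. Here is a rough sketch of grabbing exactly what a human sees, using the mss screen-capture library; the library choice and the monitor index are assumptions for illustration, not decisions we've made.

```python
# Minimal sketch: the agent's only "game state" is the raw pixels a human
# would see. The library choice (mss) and monitor index are assumptions
# for illustration, not a settled design decision.
import numpy as np
import mss

with mss.mss() as sct:
    monitor = sct.monitors[1]    # the primary display
    shot = sct.grab(monitor)     # one raw screen frame
    frame = np.array(shot)       # BGRA pixel array, shape (H, W, 4)
    print(frame.shape)           # no player coordinates, no game memory
```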

Since narrowing down our topic, we have split off and started looking at different existing technologies and research that could help us understand and build this project. I decided to look at some computer vision techniques, mainly scene segmentation.

Scene segmentation is a sort of advanced form of object recognition that not only labels the objects in a 2D scene but also tries to understand their boundaries, assigning a class to every pixel. The image to the right was taken from here, which is an implementation of SegNet. While higher-accuracy versions of scene segmentation do exist, I currently find SegNet the most useful because the source code is on the internet and can be found on GitHub.
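To give a feel for what SegNet-style output looks like, here is a minimal sketch of per-pixel labeling. Since SegNet's reference code is a Caffe model, this sketch stands in torchvision's pretrained FCN-ResNet50 instead (an assumption for illustration, not the SegNet code itself); the shape of the idea is the same: an image goes in, and every pixel comes back with a class label.

```python
# Sketch of per-pixel scene labeling. SegNet's reference code is a Caffe
# model, so this uses torchvision's pretrained FCN-ResNet50 as a stand-in;
# the idea is the same: an image in, a class label for every pixel out.
# The file name "frame.png" is a placeholder.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.segmentation.fcn_resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # standard ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame.png").convert("RGB")  # one captured game frame
batch = preprocess(frame).unsqueeze(0)          # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]                # shape: (1, 21, H, W)

labels = logits.argmax(dim=1).squeeze(0)        # per-pixel class IDs, (H, W)
print(labels.shape, labels.unique())
```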



Other implementations of scene segmentation are very impressive as well; the video included here shows the power that a system like this could have. It is reasonable to say that we could use this in our final product.


I also started to look at the basics of audio triangulation and speech recognition. Since human players have access to this data, I thought it would be interesting to have the AI act on sound as well. If the AI 'hears' another character, it should react accordingly.
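As a starting point for the 'hearing' part, a bearing can be estimated from the stereo output alone by cross-correlating the left and right channels and turning the peak offset into an angle. The sketch below is a minimal version of that idea; the function name, ear spacing, and sample rate are placeholder assumptions, and a real implementation would probably want something more robust like GCC-PHAT.

```python
# Sketch: estimating which direction a sound came from using the time
# difference of arrival (TDOA) between the left and right channels.
# Plain cross-correlation for simplicity; a real system would likely use
# GCC-PHAT. The ear spacing and sample rate here are assumed values.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def estimate_bearing(left, right, sample_rate=44100, ear_distance=0.2):
    """Approximate bearing of a sound source in degrees (0 = straight ahead)."""
    # The peak of the cross-correlation gives the inter-channel sample delay.
    corr = np.correlate(left, right, mode="full")
    delay_samples = np.argmax(corr) - (len(right) - 1)
    delay_seconds = delay_samples / sample_rate
    # Convert the delay into an angle; clamp so arcsin stays in its domain.
    ratio = np.clip(delay_seconds * SPEED_OF_SOUND / ear_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))

# Toy usage: the same tone, shifted a few samples between the two channels.
tone = np.sin(2 * np.pi * 440 * np.linspace(0, 0.05, 2205))
left = np.pad(tone, (5, 0))   # arrives at the left ear slightly later
right = np.pad(tone, (0, 5))
print(estimate_bearing(left, right))
```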
