
Showing posts with the label Rei

Rei - Blog Post 10

So, I missed blog post 9; this is me acknowledging that for consistency. Anyway, the past couple of weeks have been incredibly productive for ContentsMayBeHot. Matthew has finished collecting all the replay data, we have refactored our project to reduce complexity, we have improved the runtime of our code, and finally we have started seriously training our model. The Changes: Matthew implemented multi-threading for the model's data loading, which reduced our load time between files from about 3-5 seconds to 1 second or less and lets us fully train a model in much less time! While Matthew did this, I reduced the code duplication in our project, so that if we need to change how we load our training data, we don't have to change it in multiple places. This lets us make hot-fixes much more efficiently. We also started writing some unit tests for our project using pytest. These tests were written because of a requirement for another class, but we thought it...
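To give a feel for the idea (this is a minimal sketch, not our actual implementation, and the file names and .npy layout are assumptions), a background thread can prefetch the next replay file while the model trains on the current one:

    from concurrent.futures import ThreadPoolExecutor

    import numpy as np

    def load_replay(path):
        # Hypothetical layout: one preprocessed .npy matrix per replay file.
        return np.load(path)

    def iter_replays(paths):
        # Prefetch pattern: while the caller trains on the current file, a worker
        # thread is already reading the next one, hiding most of the disk latency.
        paths = list(paths)
        if not paths:
            return
        with ThreadPoolExecutor(max_workers=1) as pool:
            pending = pool.submit(load_replay, paths[0])
            for upcoming in paths[1:] + [None]:
                data = pending.result()                           # wait for the prefetched file
                if upcoming is not None:
                    pending = pool.submit(load_replay, upcoming)  # start loading the next file
                yield data                                        # caller trains while the next file loads

    # Usage (hypothetical file names):
    # for x in iter_replays(["replay_0001.npy", "replay_0002.npy"]):
    #     model.train_on_batch(...)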

Rei - Blog Post 8

This most recent work period involved a lot of refactoring and adding some key new functionality. Matthew asked me to create a simplified Action type in addition to the one that was already in place, basically just the same actions without PRESSED and RELEASED. Since we still wanted the original structure to be there, all I had to do was cast the "complex" actions to "simple" actions. Matthew then asked if I could convert that SimpleAction type into a matrix, so we could have a clearly defined Y. This was also incredibly easy, and I am actually quite happy with how it works. All you have to do to create an array for the action is two steps: matrix = numpy.zeros(26), then if action is not SimpleAction.INVALID: matrix[action] = 1. The 26 is the number of different SimpleActions we have. Then, to make it so we can run the parser separately from the Agent, I made it so the replay parser can output numpy files for each character, where each row in the file contains the frame of...
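Put together, the one-hot conversion looks roughly like the sketch below. Only INVALID, the count of 26, and the zeros/assignment steps come from the post; the other enum member names are placeholders:

    from enum import IntEnum

    import numpy as np

    class SimpleAction(IntEnum):
        # Illustrative members only; the real enum has 26 valid actions.
        INVALID = -1
        LEFT = 0
        RIGHT = 1
        JUMP = 2
        # ... remaining actions omitted

    NUM_SIMPLE_ACTIONS = 26

    def to_one_hot(action: SimpleAction) -> np.ndarray:
        """Encode a SimpleAction as a one-hot row vector (all zeros for INVALID)."""
        matrix = np.zeros(NUM_SIMPLE_ACTIONS)
        if action is not SimpleAction.INVALID:
            matrix[action] = 1
        return matrix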

Rei - Blog Post 7

Since the beginning of the semester, Matthew and I have been working hard on the Replay Parser and Frame Collector. Both of these parts are crucial to the success of our project. With the first demo of our project done also comes the completion of this first milestone. Parsing the replay exposed some interesting information to us about how inputs are recorded, and therefore how the game sees them. Our original understanding was that multiple inputs would be spread out across multiple frames; however, this turned out not to be true. It turns out that a frame is followed by a list of events which take place on that frame. We also learned that some human actions translate into multiple inputs. For example: if you are using the control stick to move your character and you hold right on the control stick, your character moves right. One might think this means a single RIGHT_PRESS action is input; in actuality, a list of actions is generated. The list may look something l...
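As a rough sketch of the shape of that data (the event names here are placeholders, not the game's real tokens), each frame owns the list of events that occur on it:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Frame:
        # A frame is followed by the list of events that take place on that frame.
        number: int
        events: List[str] = field(default_factory=list)

    # Holding right on the control stick produces several events on one frame,
    # not a single RIGHT_PRESS spread across frames (placeholder event names).
    frame = Frame(number=100, events=["TILT_RIGHT", "RIGHT_PRESS"])
    print(frame.number, frame.events)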

Rei - Blog Post 6

We are back after a long break. While Matthew was able to work on the project over break, much of my time was taken by my job, and I then developed the flu, so the time I was able to work was much less than my partner's. Anyway, now that we are done with excuses, let's talk about what I have been working on. Now that I have recovered a bit and am capable of working again, I have begun work on a replay parser. We have spoken about replays in the past; however, I would like to take the time here to really go in depth about how replays store information. Rivals of Aether makes our job of parsing replays much easier by storing them in a plain-text format. With some help from the RoA community, we have been able to break down what a lot of the symbols mean. I am going to go into a little more depth about what data we can extract from the file and why we care about it. Line One: The first line of the file contains basic metadata about the replay itself. The line is formatted as fol...
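Because the format is plain text, very little machinery is needed to start poking at a replay. A minimal sketch, assuming a local file name and extension (the post only says that line one holds the replay's basic metadata):

    # Read the replay and peel off the metadata line.
    with open("my_replay.roa") as handle:
        lines = handle.read().splitlines()

    metadata_line = lines[0]   # line one: basic metadata about the replay itself
    body_lines = lines[1:]     # everything after the metadata line
    print(metadata_line)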

Rei - Capstone Blog Post 3

Over the past couple of weeks, progress on this project has been slow but meaningful. Matthew and I have decided to do a temporary re-scoping of the project. Instead of focusing on a 3-D game, we are going to move to a simpler 2-D game. The game is the only real change we have made, though, as we still want the AI to infer the game state from the visual buffer. The game we have chosen is the 2-D pixel fighter Rivals of Aether. We chose this game primarily because of how it outputs replay data. Rivals of Aether stores its replay data as plain text. More specifically, it stores input data as 'tuples' of (InputFrame, Input); for example, '5134y' says to press the 'Y' button on frame 5134. Using this, we can gather more data for our RivalsAgent to learn from. Currently, the plan Matthew and I have agreed upon is to work primarily with Rivals of Aether. If our implementation works well and we feel we can safely scope up, we will move from Rivals to Qua...
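As an illustration of that (InputFrame, Input) pairing only (the real replay grammar has more symbols than this pattern covers), a token like '5134y' can be split into its frame number and button:

    import re

    # Frame number followed by a single input character, e.g. '5134y'.
    TOKEN = re.compile(r"(\d+)([A-Za-z])")

    def parse_inputs(raw: str):
        """Split a raw input string into (frame, button) tuples."""
        return [(int(frame), button) for frame, button in TOKEN.findall(raw)]

    print(parse_inputs("5134y"))   # [(5134, 'y')]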

Rei - Capstone Blog Post 2

This week, I wanted to look more closely at current game AIs and get a deeper understanding of what machine-learning AIs built for video games look like. I started by looking at one of the larger computer-vision AI projects, ViZDoom. ViZDoom, according to the official website, is a "Doom-based AI research platform for reinforcement learning from raw visual information." ViZDoom sets out to accomplish a goal similar to ours: make an AI that can play Doom using only the screen buffer. The research group holds annual competitions that let many developers test their AI tweaks against each other, which results in some pretty competent AI players. After looking into ViZDoom and learning about more AI algorithms, I decided to look at some of the really amazing game AIs that are coming into public view. I found a video that explained AlphaGo, which I found I understood, at least better than I would have earlier. AlphaGo's math is p...

Rei - Capstone Blog Post 1

Over the past couple of weeks, Matthew and I have been trying to narrow down our idea for capstone. We have settled on a "modular" AI that can play first-person shooters or other similar video games. However, we decided to put a slight twist on the idea of an AI playing games. Most of the AIs that are currently out there have more information than they realistically should, like the locations of other players. We decided that our AI would only have information that is accessible to a human player. We also noticed that many of the "PlayerAIs" out there are reactionary rather than planning. While reacting is a key part of many of these games, so is strategy. We want to create an AI that thinks, at least a little bit, about the actions it is making or should make. Since narrowing down our topic, we have split off and started looking at different existing technologies and research that could help us understand and create this project. I decided to look at some comp...