
Rei Blog Post 6

We are back after a long break. While Matthew was able to work on the project over break, much of my time was taken by my job. I then developed the flu, so the time I was able to work was much less than my partner's. Anyway, now that we are done with excuses, let's talk about what I have been working on.

Now that I have recovered a bit and am capable of working again, I have begun work on a replay parser. We have spoken about replays in the past; however, I would like to take the time here to really go in depth about how replays store information.

Rivals of Aether makes our job of parsing replays much easier by storing replays in a plain text format. With some help from the RoA community, we have been able to break down what a lot of the symbols mean. I am going to go into a little more depth about what data we can extract from the file and why we care about it.

Line One

The first line of the file contains basic metadata about the replay itself. The line is formatted as follows:

'is the replay starred? || version of the replay || date and time of the replay || formatted date-time'

The data that we are really looking at here is the version of the replay. Because replays can only be played on the version they were "recorded" in, we need to separate our replay files into batches by version.
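As a sketch of that batching step (assuming, purely for illustration, that we have already pulled the version string out of line one and paired it with each filename):

```python
from collections import defaultdict

def batch_by_version(replays):
    # Group replay files by the version field from line one, since a
    # replay can only be played back on the version it was recorded in.
    # `replays` is assumed (for this sketch) to be an iterable of
    # (version, filename) pairs.
    batches = defaultdict(list)
    for version, filename in replays:
        batches[version].append(filename)
    return dict(batches)
```

Nothing fancy, but it means every batch we hand to the game later contains only replays the same game version can actually play.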

Matthew talks more about how we broke up the files in their post.

Line Two

This line contains information about the rules of the game. The line is formatted as follows:

'stage type || stage id || stock count || time limit || team switch || friendly fire switch || online play? || p1 information || p2 information'

At this point in time, it is difficult to know which of this data we will need. The stage type and stage id will be useful for further breaking up the batches; it is possible that the AI will associate techniques with stages, so this information may be useful. The stock count will most likely also come into play. But until we see our AI play, it will be hard to know.
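A rough sketch of pulling these fields apart, with two caveats: it assumes '||' is literally the on-disk separator (the notation above may just be shorthand for however the fields are really delimited), and the key names are our own labels, not anything from the file format:

```python
def parse_rules(line2):
    # Split the second replay line into named fields. Both the
    # separator and the key names here are assumptions for
    # illustration, not the confirmed on-disk layout.
    keys = ["stage_type", "stage_id", "stocks", "time_limit", "teams",
            "friendly_fire", "online", "p1_info", "p2_info"]
    fields = [f.strip() for f in line2.split("||")]
    return dict(zip(keys, fields))
```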

Replay Lines

The first two lines will always contain information in exactly this format; after this we get data that can vary. The next line will contain information about the player. If the line starts with an H, we know the player is in fact human; the rest of that line contains information about the player, like what character they are playing and their color palette. We do care what character they are playing, so we will need to extract that data.

If the player is a human, then the next line will be all of their input data from the game. The fact that this is all on one line means it will be incredibly easy to pull out of the file: just doing a read line will give us all of that player's inputs. Where this process gets messy is breaking up the inputs into readable tokens. The post where we got most of this data from has a handy chart to help us match data with inputs. I will include it below, but I will also link the original post here.

The number to the left of a letter is the frame that an input takes place on.
The first frame that a player can take action on is frame 126.
An uppercase letter means that a button is being pressed, and a lowercase letter means that a button is being released.
Tapping a direction only has an uppercase variant and only lasts for one frame.
E=tap left
I=tap right
M=tap up
O=tap down
Ff=fstrong to the left
Gg=fstrong to the right
Zz=Enable or disable angles for airdodges and specials
y  0=angled right
y 90=angled up
y180=angled left
y270=angled down

These lines will repeat until we fill up all the player slots, or until we run out of players.

Because there are no spaces between inputs, but spaces appear as part of inputs, it will be a bit more difficult to parse these files than we hoped. The main trouble here comes from the y inputs, because the three numbers after them do in fact count as part of the input. If this were not the case, we could just use the letters as breaks.
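One way around that, sketched below: treat the field after a y as a fixed three-character slice (so the space-padded angles like "y 90" survive), and let letters delimit everything else. The regex and the (frame, key, angle) token shape are our assumptions layered on top of the chart above, not the format's official grammar:

```python
import re

# An input is assumed to be: an optional frame number, one letter, and,
# only after a 'y'/'Y', a three-character angle field that may be
# space-padded (e.g. "y  0", "y 90", "y180").
TOKEN = re.compile(r"(\d+)?([A-Za-z])(?:(?<=[yY])(.{3}))?")

def tokenize(line):
    # Split one player's raw input line into (frame, key, angle)
    # tuples, carrying the most recent frame number forward so that
    # inputs without their own frame prefix inherit it.
    tokens = []
    frame = None
    for m in TOKEN.finditer(line):
        if m.group(1):
            frame = int(m.group(1))
        angle = int(m.group(3)) if m.group(3) else None
        tokens.append((frame, m.group(2), angle))
    return tokens
```

For example, a fragment like "126Zy 90z" would come out as an angle-enable on frame 126, an up-right angle, and an angle-disable, all without ever splitting on spaces.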

We will also need to really focus on syncing this input with the images coming in from SerpentAI, which will be difficult.

The next couple of weeks will be dedicated to working on the parsing and the syncing of data. 

