
Rei - Capstone Blog Post 3

Over the past couple of weeks, progress on this project has been slow but meaningful. Matthew and I have decided to do a temporary re-scoping of the project: instead of focusing on a 3-D game, we are going to move to a simpler 2-D game. The game is the only real change we have made, though, as we still want the agent to infer a game state from the visual buffer. The game we have chosen is the 2-D pixel fighter Rivals of Aether. We chose this game primarily because of how it outputs replay data. Rivals of Aether stores its replay data as plain text; more specifically, it stores input data as tuples of (InputFrame, Input). For example, '5134y' means press the 'Y' button on frame 5134. Using this, we can gather more data for our RivalsAgent to learn from. Currently, the plan Matthew and I have agreed upon is to work primarily with Rivals of Aether. If our implementation works well and we feel we can safely scope up, we will move from Rivals to Qua...
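
As a rough illustration of how we might read that data, here is a minimal Python sketch that parses tokens in the (InputFrame, Input) form described above. The token pattern and the parse_input_tokens helper are assumptions for illustration only; the real replay files contain more than this.

```python
import re

# A minimal sketch, assuming each token is a frame number followed by a
# single input character (e.g. '5134y' -> press 'Y' on frame 5134).
# The actual Rivals of Aether replay format is richer than this.
TOKEN_PATTERN = re.compile(r"(\d+)([A-Za-z])")

def parse_input_tokens(line: str) -> list[tuple[int, str]]:
    """Turn a replay input line into (frame, input) tuples."""
    return [(int(frame), key) for frame, key in TOKEN_PATTERN.findall(line)]

if __name__ == "__main__":
    # Hypothetical snippet of replay input data.
    sample = "5134y5200l"
    for frame, key in parse_input_tokens(sample):
        print(f"frame {frame}: input '{key}'")
```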

Matthew - Capstone Blog Post 2

In Towards Integrated Imitation of Strategic Planning and Motion Modeling in Interactive Computer Games, Dublin City University researchers Bernard Gorman and Mark Humphrys detail their experiments in using imitation learning techniques to teach an artificial agent to play a first-person shooter (in this case, Quake) in a way that would convince onlookers and other players that it is human. They describe a superior agent, one which imitates "the observed goal-oriented behaviors of a human player" (p. 2) in order to play with competence, exhibit strategic thinking, and employ "believably human-like motion" (p. 1). In other words, they want to create a bot that can pass a kind of Turing deathmatch. Gorman and Humphrys describe the behavior model that serves as the basis for their work. The model specifies several levels of control, where each level corresponds roughly to how much time the agent has to form a plan. These range from immediate ("scrambled control...

Rei - Capstone Blog Post 2

This week, I wanted to look more deeply at current game AIs and try to get a deeper understanding of what machine learning AIs created for video games look like. I started by looking at one of the larger computer vision AIs, ViZDoom. ViZDoom, according to the official website, is a "Doom-based AI research platform for reinforcement learning from raw visual information." ViZDoom sets out to accomplish a goal similar to ours: make an AI that can play Doom using only the screen buffer. The research group holds annual competitions that let many developers test their AI tweaks against one another, which results in some pretty competent AI players. After looking into ViZDoom and learning about more AI algorithms, I decided to look at some of the really impressive game AIs that are coming into public view. I found a video that explained AlphaGo, which I found I understood, at least better than I would have earlier. AlphaGo's math is p...
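
For reference, ViZDoom exposes that screen buffer through a small Python API. Below is a minimal sketch of its episode loop with a random placeholder policy; the scenario config path is an assumption and would need to point at wherever the package's example scenarios live on your machine.

```python
from random import choice
import vizdoom as vzd

# A minimal sketch of the ViZDoom loop: the agent only ever sees the raw
# screen buffer, which is exactly the constraint our project is aiming for.
game = vzd.DoomGame()
game.load_config("scenarios/basic.cfg")  # path is an assumption; adjust to your install
game.init()

# Three discrete actions for the basic scenario: move left, move right, attack.
actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

game.new_episode()
while not game.is_episode_finished():
    state = game.get_state()
    frame = state.screen_buffer              # raw pixels, the only input the agent gets
    reward = game.make_action(choice(actions))  # random policy as a placeholder

print("total reward:", game.get_total_reward())
game.close()
```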

Matthew - Capstone Blog Post 1

First, I would like to discuss our goals and long-term plans. We want to create an artificial intelligence that learns how to play first-person shooters without access to any game state data other than what can be gained through the audio and visual buffers. In other words, the AI will only have access to the information that a human player would. If we are successful with these constraints, then I could see our work leading to AI that can play any game, or even AI-driven robots that play games using mechanical hands and eyes to operate normal peripherals and interfaces. We are currently in the research and planning phase of the project. This means that our job right now is to decide exactly what tools and technologies we will use and how they will interact with one another. By the end of the semester, which arrives sometime this December, we will need to be ready for phase two: the development, training, and testing phase. Yes, that is all three at once. However, if...

Rei - Capstone Blog Post 1

Over the past couple of weeks, Matthew and I have been trying to narrow down our idea for capstone. We have settled on a "modular" AI that can play first-person shooters or other similar video games. However, we decided to put a slight twist on the idea of an AI playing games. Most of the game-playing AIs currently out there have more information than they possibly should at any given time, like the locations of other players. We decided that our AI would only have information that would be accessible to a human player. We also noticed that many of the "PlayerAIs" out there are reactionary, not planning. While reacting is a key part of many of these games, so is strategy. We want to create an AI that thinks, at least a little bit, about the actions it is making or should make. Since narrowing down our topic, we have split off and started looking at different existing technologies and research that could help us understand and create this project. I decided to look at some comp...