Ph.D. thesis of Gabriel Synnaeve (2012)
The goal of this work is to explore how Bayesian models can be used for adaptive and dynamic AI in video games, in order to make games more fun for the player. Video games also require robust, and often even controllable, AIs.
Multiplayer video games are a convenient middle ground between the real world and simulators for developing and benchmarking AI techniques. Indeed, they are simulated worlds: no sensor problems, finite worlds, limited parameters; and yet, the other players are human beings (or advanced robots in the case of AI competitions).
The purpose of this work is threefold:
Player viewpoint: The goal is to achieve AI that is “fun to play against”, by making it more competitive (stronger) and less predictable (adaptive).
Game developer viewpoint: Provide additional tools to benchmark/balance the game and explore the meta-game, as well as a convenient framework for developing complex/adaptive AI through machine learning.
Theoretical viewpoint: The goal is to take algorithms and techniques designed for robotics and sensory-motor systems (Bayesian programming) and test their validity on a different class of problems (game AI).
In First Person Shooters (see Le Hy) and particularly in Massively Multiplayer Online Role Playing Games, this could result in intelligent “pets”, or in bots that replace players (when disconnected or away from keyboard) and are tuned to their playstyle by learning.
In Real-Time Strategy games, Bayesian approaches are interesting for tackling all the different levels of reasoning: from strategic reasoning (the highest level, learning parameters from data extracted from previous games, known as “replays”), to tactical reasoning (map reasoning under enemy and time constraints), down to micro-management (coordination of multiple units).
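To illustrate the idea of learning strategic parameters from replays, here is a minimal sketch of a naive-Bayes-style predictor of an opponent's opening strategy from observed units. All strategy labels, unit names, and counts below are invented for illustration; this is not the thesis's actual model, only a toy instance of the same Bayesian learn-from-replays principle.

```python
# Toy sketch: predict an opponent's opening strategy from observed units,
# with parameters "learned" (counted) from labeled replay data.
# Strategy names and unit types are hypothetical examples.
from collections import Counter, defaultdict
import math

# Hypothetical replay data: (units observed in the opening, labeled strategy).
replays = [
    (["barracks", "marine", "marine"], "rush"),
    (["barracks", "marine", "bunker"], "rush"),
    (["refinery", "factory", "tank"], "tech"),
    (["refinery", "factory", "starport"], "tech"),
    (["command_center", "scv", "scv"], "economy"),
]

def train(replays):
    """Estimate P(strategy) and P(unit | strategy) by counting replays."""
    strat_counts = Counter(label for _, label in replays)
    unit_counts = defaultdict(Counter)
    vocab = set()
    for units, label in replays:
        unit_counts[label].update(units)
        vocab.update(units)
    return strat_counts, unit_counts, vocab

def predict(observed, strat_counts, unit_counts, vocab):
    """Return argmax_s P(s) * prod_u P(u | s), with Laplace smoothing."""
    total = sum(strat_counts.values())
    best, best_lp = None, -math.inf
    for strat, count in strat_counts.items():
        lp = math.log(count / total)  # log prior P(strategy)
        denom = sum(unit_counts[strat].values()) + len(vocab)
        for u in observed:
            # Smoothed log likelihood of each observed unit under this strategy.
            lp += math.log((unit_counts[strat][u] + 1) / denom)
        if lp > best_lp:
            best, best_lp = strat, lp
    return best

strat_counts, unit_counts, vocab = train(replays)
print(predict(["marine", "barracks"], strat_counts, unit_counts, vocab))  # rush
```

A real system would estimate these distributions from thousands of parsed replays and reason over richer variables (timings, tech trees, partial observations), but the structure — a prior over strategies updated by observed evidence — is the same.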