Asymmetric Abstractions for Adversarial Settings

Nolan Bard, Michael Johanson, and Michael Bowling. Asymmetric Abstractions for Adversarial Settings. In Proceedings of the Thirteenth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2014.



Abstract

In multiagent domains, an agent’s beliefs about how other agents will or could act play a significant role in its own behaviour. In large domains where it is infeasible to uniquely represent every possible decision an agent will face, abstraction is often used to collapse the state and action space to make the problem tractable. By abstracting other agents’ views of the environment, the agent makes assumptions about how other agents act. Incorrect abstraction choices can yield less than ideal performance, as other agents may, in reality, use coarser or finer abstractions than they were modelled with. The standard approach when abstracting is to use symmetric abstraction, where all agents are assumed to distinguish states in the same way. This work examines the benefits and potential pitfalls of using asymmetric abstractions in two-player zero-sum extensive-form games. Using the domain of two-player limit Texas hold’em poker, we investigate the performance of strategies using both symmetric and asymmetric abstractions in terms of in-game utility and worst-case utility in the real game. Furthermore, we show that combining asymmetric abstractions with robust counter-strategy techniques can produce counter-strategies that dominate their symmetric-abstraction counterparts in terms of both exploitative power and worst-case utility.


Notes

Game-solving algorithms such as CFR give us an efficient way to compute a Nash equilibrium for large games such as Texas hold'em poker. However, even though these algorithms are efficient in time and memory, all of the human-scale poker games that we are interested in are still too large to solve exactly, even with large clusters of computers. Instead of solving these games directly, a standard approach is to use an abstraction technique to simplify the game, solve this simpler game, and then use the resulting strategy to play the real game. The result isn't quite an optimal strategy for the real game, because the abstraction techniques needed to reach a solvable game are lossy. However, we can still measure how good these strategies are by measuring their exploitability with a real-game best-response computation, or through matches against other strategies. In addition to computing Nash equilibrium strategies, this abstraction-solving-translation approach is also used to compute counter-strategies from data about the opponent, using the Restricted Nash Response or Data Biased Response algorithms.
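
As a rough illustration of this pipeline, the sketch below shows the three stages in miniature: a card abstraction that collapses hands into buckets, a placeholder for the strategy table an offline solver such as CFR would produce, and a translation step that maps a real-game situation into the abstract strategy's keys at play time. The names, bucket count, and three-action interface are illustrative assumptions, not the implementation used in the paper.

# Minimal sketch of the abstract-solve-translate pipeline (hypothetical
# interfaces, not the paper's code).
from collections import defaultdict
import random

N_BUCKETS = 5  # assumed card-abstraction granularity

def bucket(hand_strength):
    """Card abstraction: collapse a hand-strength value in [0, 1] to a bucket."""
    return min(int(hand_strength * N_BUCKETS), N_BUCKETS - 1)

def abstract_infoset(hand_strength, betting_history):
    """Translate a real-game view (cards + betting) into an abstract key.
    Only the cards are abstracted here; the betting sequence is kept exact,
    as is typical for limit hold'em abstractions."""
    return "%d|%s" % (bucket(hand_strength), betting_history)

# Placeholder for the offline solver's output: in practice this table would
# come from running CFR on the abstract game for many iterations.
abstract_strategy = defaultdict(lambda: {"fold": 1/3, "call": 1/3, "raise": 1/3})

def act(hand_strength, betting_history):
    """Play the real game by looking up the abstract strategy."""
    probs = abstract_strategy[abstract_infoset(hand_strength, betting_history)]
    actions, weights = zip(*probs.items())
    return random.choices(actions, weights=weights)[0]

print(act(0.92, "rc"))  # e.g. acting after a raise and a call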


The standard approach when using abstraction is to use the same abstraction technique for all players in the game, so that every player has the same kind of simplification inflicted on them. This seems "fair" in that the resulting strategy doesn't artificially punish one player with a worse representation of the cards, and we may expect it to generate a strategy that minimizes exploitability and is fairly close to a Nash equilibrium. When we use one of these strategies in a match against an actual opponent, we are in fact making an assumption about the opponent: that we believe they are about as strong as we are, and that our strategy should behave accordingly.


However, these "symmetric" abstractions are not the only option. In practice, we can use different qualities of abstraction for all players in the game. In this paper, we investigate these asymmetric abstraction choices for the first time, and find that there are useful properties in the resulting strategies. It is useful to think of this choice in solving a game between "our" agent and an "opponent" agent, where in our later games and evaluations we will only use "our" half of the strategy. In general, if we solve a game where our abstraction is of higher quality than the opponent's, then our resulting strategy will be both more exploitable in the real game, but will also win more in one-on-one matches against a range of adversaries. The way that our strategy plays offense against an assumed weaker opponent appears to be generally applicable against many types of weaker opponents.On the other hand, if our abstraction is weaker than the opponent's, then the resulting strategy tends to be both less exploitable, but also win less in actual matches. This defensive response to a slightly stronger opponent appears correlated with how we would defend ourselves against a true real-game best-response opponent. This ability to prioritize either one-on-one performance or worst-case exploitability through changing abstraction sizes gives us another useful tool for creating poker strategies.


In addition to these general trends, which we measured in heads-up limit Texas hold'em using well-known abstraction techniques, we also discovered the first abstraction pathologies in a non-trivial poker game: situations where increasing the size of both players' abstractions makes the resulting strategy more exploitable in the real game. This counterintuitive result had previously been demonstrated by Waugh et al. in the small game of Leduc hold'em, but had not been observed before in large human-scale games.


In addition to exploring the effects of asymmetric abstraction in equilibrium play, we also investigate its use in the opponent modelling and robust counter-strategy task. In this setting, a third abstraction is used: the abstraction within which we build our model of the opponent's behaviour. A large abstraction allows us to represent the opponent more accurately, but requires more data from which to learn their strategy in the larger space. A smaller abstraction might not be as accurate, but can be learned from a much smaller amount of data. Through experiments in which we vary all three abstractions (our own, the opponent model's, and the opponent's), we can produce robust counter-strategies that are no more exploitable in the real game than an abstract-game Nash equilibrium, but have greatly increased payoffs against a target opponent.
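
As a hedged sketch of what an opponent model living in its own abstraction might look like, the code below counts observed opponent actions inside a configurable bucketing and returns action frequencies, falling back to a uniform default for unseen situations. The class, bucketing scheme, and action set are illustrative assumptions, not the Restricted Nash Response or Data Biased Response implementations themselves.

# Sketch of a frequentist opponent model built inside a (possibly coarser)
# model abstraction; hypothetical interfaces.
from collections import defaultdict

ACTIONS = ("fold", "call", "raise")

class OpponentModel:
    def __init__(self, n_buckets):
        self.n_buckets = n_buckets
        self.counts = defaultdict(lambda: {a: 0 for a in ACTIONS})

    def _key(self, hand_strength, betting_history):
        bucket = min(int(hand_strength * self.n_buckets), self.n_buckets - 1)
        return "%d|%s" % (bucket, betting_history)

    def observe(self, hand_strength, betting_history, action):
        """Record one observed opponent decision."""
        self.counts[self._key(hand_strength, betting_history)][action] += 1

    def frequencies(self, hand_strength, betting_history):
        """Return observed action frequencies, or a uniform default when the
        abstract situation has never been seen."""
        c = self.counts[self._key(hand_strength, betting_history)]
        total = sum(c.values())
        if total == 0:
            return {a: 1 / len(ACTIONS) for a in ACTIONS}
        return {a: n / total for a, n in c.items()}

# A coarse model (few buckets) needs less data per abstract state; a fine
# model is more faithful but leaves many states unobserved.
coarse_model = OpponentModel(n_buckets=5)
coarse_model.observe(0.92, "rc", "raise")
print(coarse_model.frequencies(0.92, "rc"))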



BibTeX

@InProceedings{ 2014aamas-asymmetric-abstractions,
  Title = "Asymmetric Abstractions for Adversarial Settings",
  Author = "Nolan Bard and Michael Johanson and Michael Bowling",
  Booktitle = "Proceedings of the Thirteenth International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-14)",
  Year = "2014"
}