{"title":"简化扑克中基于期望-最大化和序列预测的对手建模","authors":"Richard Mealing, J. Shapiro","doi":"10.1109/TCIAIG.2015.2491611","DOIUrl":null,"url":null,"abstract":"We consider the problem of learning an effective strategy online in a hidden information game against an opponent with a changing strategy. We want to model and exploit the opponent and make three proposals to do this; first, to infer its hidden information using an expectation–maximization (EM) algorithm; second, to predict its actions using a sequence prediction method; and third, to simulate games between our agent and our opponent model in-between games against the opponent. Our approach does not require knowledge outside the rules of the game, and does not assume that the opponent’s strategy is stationary. Experiments in simplified poker games show that it increases the average payoff per game of a state-of-the-art no-regret learning algorithm.","PeriodicalId":49192,"journal":{"name":"IEEE Transactions on Computational Intelligence and AI in Games","volume":"9 1","pages":"11-24"},"PeriodicalIF":0.0000,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TCIAIG.2015.2491611","citationCount":"18","resultStr":"{\"title\":\"Opponent Modeling by Expectation–Maximization and Sequence Prediction in Simplified Poker\",\"authors\":\"Richard Mealing, J. Shapiro\",\"doi\":\"10.1109/TCIAIG.2015.2491611\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We consider the problem of learning an effective strategy online in a hidden information game against an opponent with a changing strategy. We want to model and exploit the opponent and make three proposals to do this; first, to infer its hidden information using an expectation–maximization (EM) algorithm; second, to predict its actions using a sequence prediction method; and third, to simulate games between our agent and our opponent model in-between games against the opponent. 
Our approach does not require knowledge outside the rules of the game, and does not assume that the opponent’s strategy is stationary. Experiments in simplified poker games show that it increases the average payoff per game of a state-of-the-art no-regret learning algorithm.\",\"PeriodicalId\":49192,\"journal\":{\"name\":\"IEEE Transactions on Computational Intelligence and AI in Games\",\"volume\":\"9 1\",\"pages\":\"11-24\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1109/TCIAIG.2015.2491611\",\"citationCount\":\"18\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Computational Intelligence and AI in Games\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TCIAIG.2015.2491611\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computational Intelligence and AI in Games","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TCIAIG.2015.2491611","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Computer Science","Score":null,"Total":0}
Opponent Modeling by Expectation–Maximization and Sequence Prediction in Simplified Poker
We consider the problem of learning an effective strategy online in a hidden information game against an opponent with a changing strategy. We want to model and exploit the opponent, and make three proposals to do this: first, to infer its hidden information using an expectation–maximization (EM) algorithm; second, to predict its actions using a sequence prediction method; and third, to simulate games between our agent and our opponent model in between games against the opponent. Our approach does not require knowledge outside the rules of the game, and does not assume that the opponent's strategy is stationary. Experiments in simplified poker games show that it increases the average payoff per game of a state-of-the-art no-regret learning algorithm.
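To make the second proposal concrete, the sketch below shows one simple form a sequence predictor for opponent actions can take: an n-gram frequency model that predicts the most common action following the current context. This is an illustrative stand-in, not the specific predictor used in the paper; the class and method names are assumptions for the example.

```python
from collections import defaultdict


class NGramPredictor:
    """Predict the opponent's next action from the last `order` actions
    using simple frequency counts. Illustrative only: a minimal stand-in
    for the sequence-prediction component described in the abstract."""

    def __init__(self, order=2):
        self.order = order
        # context tuple -> {action: count}
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = []

    def observe(self, action):
        """Record an observed opponent action under the current context."""
        ctx = tuple(self.history[-self.order:])
        self.counts[ctx][action] += 1
        self.history.append(action)

    def predict(self):
        """Return the most frequent action seen after the current context,
        or None if this context has never been observed."""
        ctx = tuple(self.history[-self.order:])
        dist = self.counts.get(ctx)
        if not dist:
            return None
        return max(dist, key=dist.get)
```

For example, after observing a strictly alternating "bet"/"check" pattern, an order-1 predictor learns that "bet" follows "check" and predicts accordingly. In the paper's setting such predictions would feed the simulated games played against the opponent model between real games.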
Journal Introduction:
Ceased publication. The IEEE Transactions on Computational Intelligence and AI in Games (T-CIAIG) published archival journal-quality original papers in computational intelligence and related areas of artificial intelligence applied to games, including but not limited to videogames, mathematical games, human–computer interaction in games, and games involving physical objects. Emphasis was placed on using these methods to improve performance in games and understanding of their dynamics, as well as gaining insight into the properties of the methods as applied to games. The scope also included using games as a platform for building intelligent embedded agents for the real world. Papers connecting games to all areas of computational intelligence and traditional AI were considered.