Opponent modeling with incremental active learning: A case study of Iterative Prisoner's Dilemma
Hyun-Soo Park, Kyung-Joong Kim
2013 IEEE Conference on Computational Intelligence in Games (CIG), 2013-10-17
DOI: 10.1109/CIG.2013.6633665
Citations: 4
Abstract
What is the most important source of information for inferring an opponent's internal strategy? The best approach is to play games against them and infer the strategy from that experience. Novice players must play many games to identify another player's strategy, whereas experienced players can usually model it after only a few. Their secret is to design their plays intelligently so as to maximize the chance of probing the most uncertain parts of the model. Following this idea, we propose an incremental active learning method for opponent modeling. It refines the opponent model incrementally by cycling between "estimation (inference)" and "exploration (playing games)" steps. Experimental results on Iterative Prisoner's Dilemma games show that the proposed method successfully reveals the opponent's strategy.
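The estimation/exploration cycle described above can be illustrated with a minimal sketch. This is not the authors' implementation: the candidate strategy set, the hypothesis-elimination rule standing in for "estimation", and the disagreement-based probe standing in for "exploration" are all illustrative assumptions. It models each candidate opponent as a memory-one map from our previous move to their reply, prunes candidates inconsistent with observed play, and chooses the move on which the surviving candidates' predicted replies disagree most.

```python
# Hypothetical candidate strategies for the opponent, each mapping our
# previous move ('C'/'D', or None on the first round) to the opponent's
# reply. The names and the elimination scheme are illustrative only.
CANDIDATES = {
    "always_cooperate":       lambda prev: "C",
    "always_defect":          lambda prev: "D",
    "tit_for_tat":            lambda prev: "C" if prev is None else prev,
    "suspicious_tit_for_tat": lambda prev: "D" if prev is None else prev,
}

def probe_move(candidates):
    """Exploration step: pick the move whose predicted replies disagree
    most across the surviving candidate models (most informative probe)."""
    def disagreement(move):
        return len({f(move) for f in candidates.values()})
    return max(["C", "D"], key=disagreement)

def identify(opponent, rounds=10):
    """Cycle estimation (prune inconsistent models) and exploration
    (play the most informative probe) until one model survives."""
    candidates = dict(CANDIDATES)
    prev = None  # our previous move, which the candidates condition on
    for _ in range(rounds):
        my_move = probe_move(candidates)
        reply = opponent(my_move)  # opponent reacts to our *previous* move
        # Estimation step: keep only models consistent with the observation.
        candidates = {name: f for name, f in candidates.items()
                      if f(prev) == reply}
        prev = my_move
        if len(candidates) <= 1:
            break
    return set(candidates)

# Example: identify a (stateful) tit-for-tat opponent.
state = {"prev": None}
def tft_opponent(my_move):
    reply = "C" if state["prev"] is None else state["prev"]
    state["prev"] = my_move
    return reply

print(identify(tft_opponent))  # → {'tit_for_tat'}
```

In this toy run the first observed reply already eliminates the defect-first candidates, and a deliberate defection probe then separates tit-for-tat from unconditional cooperation; a uniform-random probe would discover the same model only by chance, which is the intuition behind the active-learning loop.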