Title: Learning in games: a systematic review
Authors: Rong-Jun Qin, Yang Yu
Journal: Science China Information Sciences, Volume 176
DOI: https://doi.org/10.1007/s11432-023-3955-x
Publication date: 2024-06-28
Publication type: Journal Article
Impact factor: 7.3 (JCR Q1, Computer Science, Information Systems; Region 2, Computer Science)
Open access: no
Citations: 0
Abstract
Game theory studies mathematical models of self-interested individuals, and the Nash equilibrium is arguably its most central solution concept. While computing a Nash equilibrium in general is PPAD-complete (complete for the class of polynomial parity arguments on directed graphs), learning in games offers an alternative way to approximate a Nash equilibrium: each player iteratively updates its strategy through interactions with the other players. Many rules and models have been developed for learning in games, such as fictitious play and no-regret learning. In particular, recent advances in online learning and deep reinforcement learning have greatly accelerated breakthroughs in learning in games, from theory to application. As a result, we have witnessed many superhuman game AI systems. The techniques behind these systems have evolved from conventional search-and-learning pipelines to purely reinforcement learning (RL)-style methods, gradually shedding hand-crafted domain knowledge. In this article, we systematically review these techniques, discuss the trend of basic learning rules toward a unified framework, and recap applications in large games. Finally, we discuss some future directions and offer a prospect for future game AI systems. We hope this article provides some insights for designing novel approaches.
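To make the "iteratively updates the player's strategy through interactions" idea concrete, here is a minimal sketch of fictitious play, one of the learning rules the abstract names, applied to rock-paper-scissors. The payoff matrix, iteration count, and variable names are illustrative choices of this sketch, not details from the paper. Each player best-responds to the opponent's empirical action frequencies; in this two-player zero-sum game, those frequencies converge toward the uniform Nash equilibrium.

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors; the game is
# zero-sum, so the column player's payoff is the negation of A.
A = np.array([
    [ 0, -1,  1],   # rock     vs rock, paper, scissors
    [ 1,  0, -1],   # paper
    [-1,  1,  0],   # scissors
])

# Empirical action counts, initialized uniformly (acts as a prior).
counts_row = np.ones(3)
counts_col = np.ones(3)

for _ in range(20000):
    # Each player best-responds to the opponent's empirical mixture:
    # the row player maximizes its expected payoff, the column player
    # minimizes the row player's expected payoff (zero-sum).
    row_action = np.argmax(A @ (counts_col / counts_col.sum()))
    col_action = np.argmin((counts_row / counts_row.sum()) @ A)
    counts_row[row_action] += 1
    counts_col[col_action] += 1

freq_row = counts_row / counts_row.sum()
print(freq_row)  # approaches the uniform Nash equilibrium (1/3, 1/3, 1/3)
```

Note that the players' actual play cycles through the pure strategies; it is the time-averaged (empirical) strategy that approximates the Nash equilibrium, which is exactly the sense in which learning in games "approximates" equilibria as described above.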
Journal introduction:
Science China Information Sciences is a dedicated journal that showcases high-quality, original research across various domains of information sciences. It encompasses Computer Science & Technologies, Control Science & Engineering, Information & Communication Engineering, Microelectronics & Solid-State Electronics, and Quantum Information, providing a platform for the dissemination of significant contributions in these fields.