{"title":"An introduction to learning in finite games","authors":"J. Shamma","doi":"10.23919/ACC55779.2023.10156273","DOIUrl":null,"url":null,"abstract":"In the setting of learning in games, player strategies evolve in an effort to maximize utility in response to the evolving strategies of other players. In contrast to the single agent case, learning in the presence of other learners induces a non-stationary environment from the perspective of any individual player. Depending on the specifics of the game and the learning dynamics, the evolving strategies may exhibit a variety of behaviors ranging from convergence to Nash equilibrium to oscillations to even chaos. This talk presents a basic introduction to learning in games through the presentation of selected results for finite normal form games, i.e., games with a finite number of players having a finite number of actions. The talk starts with a representative sample of learning dynamics that converge to Nash equilibrium for special classes of games. Specific learning dynamics include better reply dynamics, joint strategy fictitious play, and log-linear learning, with results for potential games and weakly acyclic games. These results apply to specifically pure Nash equilibrium. The talk also presents dynamics that address mixed/randomized strategy Nash equilibria, specifically smooth fictitious play and gradient play. The talk concludes with limitations in learning that stem from the notion of uncoupled dynamics, where a player’s learning dynamics cannot depend explicitly on the utility functions of other players.","PeriodicalId":397401,"journal":{"name":"2023 American Control Conference (ACC)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 American Control Conference (ACC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ACC55779.2023.10156273","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In the setting of learning in games, player strategies evolve in an effort to maximize utility in response to the evolving strategies of other players. In contrast to the single-agent case, learning in the presence of other learners induces a non-stationary environment from the perspective of any individual player. Depending on the specifics of the game and the learning dynamics, the evolving strategies may exhibit a variety of behaviors, ranging from convergence to Nash equilibrium, to oscillations, to even chaos. This talk presents a basic introduction to learning in games through selected results for finite normal-form games, i.e., games with a finite number of players, each having a finite number of actions. The talk starts with a representative sample of learning dynamics that converge to Nash equilibrium for special classes of games. Specific learning dynamics include better-reply dynamics, joint strategy fictitious play, and log-linear learning, with results for potential games and weakly acyclic games. These results apply specifically to pure Nash equilibria. The talk also presents dynamics that address mixed/randomized-strategy Nash equilibria, specifically smooth fictitious play and gradient play. The talk concludes with limitations on learning that stem from the notion of uncoupled dynamics, in which a player's learning dynamics cannot depend explicitly on the utility functions of other players.
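To make one of the named dynamics concrete, the following is a minimal sketch (not the talk's own formulation) of log-linear learning on a two-player, two-action coordination game, which is a potential game. The payoff matrices, temperature value, and step count are illustrative assumptions; the key feature is that the revising player responds to the other player's current action with a softmax (Gibbs) rule, so that as the temperature shrinks, play concentrates on the potential-maximizing pure Nash equilibrium.

```python
import numpy as np

# Illustrative 2x2 coordination (potential) game: both players prefer to match.
# U[i][a0, a1] is player i's utility when player 0 plays a0 and player 1 plays a1.
# Matching on action 1 yields the higher payoff (the potential maximizer).
U = [np.array([[1.0, 0.0],
               [0.0, 2.0]]),
     np.array([[1.0, 0.0],
               [0.0, 2.0]])]

def log_linear_learning(U, tau=0.05, steps=5000, seed=None):
    """Sketch of log-linear learning: at each step, one randomly chosen player
    revises its action via a softmax response (temperature tau) to the other
    player's current action."""
    rng = np.random.default_rng(seed)
    a = [int(rng.integers(2)), int(rng.integers(2))]   # random initial joint action
    for _ in range(steps):
        i = int(rng.integers(2))                       # player selected to revise
        other = a[1 - i]
        # Utilities of each candidate action, opponent's action held fixed.
        utils = U[i][:, other] if i == 0 else U[i][other, :]
        probs = np.exp(utils / tau)
        probs /= probs.sum()
        a[i] = int(rng.choice(2, p=probs))
    return a

print(log_linear_learning(U))  # typically [1, 1]: the high-payoff pure Nash equilibrium
```

With a small temperature, the joint action spends most of its time at the potential maximizer; raising `tau` makes the revisions noisier and the long-run behavior closer to uniform, which is the trade-off the temperature parameter controls in this class of dynamics.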