{"title":"多智能体博弈中的被动、强化学习和学习","authors":"Lacra Pavel","doi":"10.23919/ACC55779.2023.10156507","DOIUrl":null,"url":null,"abstract":"Learning algorithm behavior highly depends on the game setting. In this tutorial talk, we discuss how these dependencies can be explained, if one regards them through a passivity lens. We focus on two representative instances in reinforcement learning: payoff-based play, and Q-learning. We show how one can exploit geometric features of different classes of games, together with dissipativity/passivity properties of interconnected systems to guarantee global convergence to a Nash equilibrium. Besides simplifying the proof of convergence, one can generate algorithms that work for classes of games with less stringent assumptions, by using passivity and basic properties of interconnected systems.","PeriodicalId":397401,"journal":{"name":"2023 American Control Conference (ACC)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Passivity, RL and Learning in Multi-Agent Games\",\"authors\":\"Lacra Pavel\",\"doi\":\"10.23919/ACC55779.2023.10156507\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Learning algorithm behavior highly depends on the game setting. In this tutorial talk, we discuss how these dependencies can be explained, if one regards them through a passivity lens. We focus on two representative instances in reinforcement learning: payoff-based play, and Q-learning. We show how one can exploit geometric features of different classes of games, together with dissipativity/passivity properties of interconnected systems to guarantee global convergence to a Nash equilibrium. Besides simplifying the proof of convergence, one can generate algorithms that work for classes of games with less stringent assumptions, by using passivity and basic properties of interconnected systems.\",\"PeriodicalId\":397401,\"journal\":{\"name\":\"2023 American Control Conference (ACC)\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 American Control Conference (ACC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/ACC55779.2023.10156507\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 American Control Conference (ACC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ACC55779.2023.10156507","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The behavior of learning algorithms depends strongly on the game setting. In this tutorial talk, we discuss how these dependencies can be explained when viewed through a passivity lens. We focus on two representative instances in reinforcement learning: payoff-based play and Q-learning. We show how one can exploit geometric features of different classes of games, together with dissipativity/passivity properties of interconnected systems, to guarantee global convergence to a Nash equilibrium. Besides simplifying the convergence proofs, passivity and basic properties of interconnected systems make it possible to derive algorithms that work for classes of games under less stringent assumptions.
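As a concrete illustration of the kind of dynamics the abstract refers to, the sketch below shows payoff-based smoothed Q-learning (softmax action selection) in a 2x2 zero-sum game. This is a hypothetical, minimal example, not the talk's actual algorithms or proofs; the temperature `tau`, learning rate `alpha`, and the matching-pennies payoff matrix are illustrative assumptions. Zero-sum games have a monotone game mapping, which is the kind of geometric structure that passivity-based convergence arguments exploit.

```python
# Hypothetical sketch (not the talk's algorithm): payoff-based smoothed
# Q-learning in matching pennies, a 2x2 zero-sum game. Each player observes
# only its own realized payoff, as in payoff-based play.
import numpy as np

rng = np.random.default_rng(0)

# Matching pennies: A holds player 1's payoffs; player 2 receives -A.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def softmax(q, tau):
    """Boltzmann policy with temperature tau (smoothed best response)."""
    z = np.exp(q / tau)
    return z / z.sum()

tau, alpha = 0.5, 0.05             # assumed temperature and learning rate
Q1, Q2 = np.zeros(2), np.zeros(2)  # per-action Q-values for each player

for t in range(20000):
    x, y = softmax(Q1, tau), softmax(Q2, tau)
    a1 = rng.choice(2, p=x)        # each player samples an action...
    a2 = rng.choice(2, p=y)
    r1, r2 = A[a1, a2], -A[a1, a2]  # ...and observes only its own payoff
    # Payoff-based update: only the played action's Q-value changes.
    Q1[a1] += alpha * (r1 - Q1[a1])
    Q2[a2] += alpha * (r2 - Q2[a2])

# The unique (smoothed) equilibrium of matching pennies is uniform play,
# so both policies should settle near [0.5, 0.5].
print("player 1 policy:", softmax(Q1, tau))
print("player 2 policy:", softmax(Q2, tau))
```

Under this setup the iterates drift toward the perturbed (quantal-response) equilibrium rather than cycling, which is the behavior that dissipativity/passivity arguments certify globally for suitable game classes.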