{"title":"动态图形游戏在线策略迭代解决方案","authors":"M. Abouheaf, M. Mahmoud","doi":"10.1109/SSD.2016.7473693","DOIUrl":null,"url":null,"abstract":"The dynamic graphical game is a special class of the standard dynamic game and explicitly captures the structure of a communication graph, where the information flow between the agents is governed by the communication graph topology. A novel online adaptive learning (policy iteration) solution for the graphical game is given in terms of the solution to a set of coupled graphical game Hamiltonian and Bellman equations. The policy iteration solution is developed to learn Nash solution for the dynamic graphical game online in real-time. Policy iteration convergence proof for the dynamic graphical game is given under mild condition about the graph interconnectivity properties. Critic neural network structures are used to implement the online policy iteration solution. Only partial knowledge of the dynamics is required and the tuning is done in a distributed fashion in terms of the local information available to each agent.","PeriodicalId":149580,"journal":{"name":"2016 13th International Multi-Conference on Systems, Signals & Devices (SSD)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Online policy iteration solution for dynamic graphical games\",\"authors\":\"M. Abouheaf, M. Mahmoud\",\"doi\":\"10.1109/SSD.2016.7473693\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The dynamic graphical game is a special class of the standard dynamic game and explicitly captures the structure of a communication graph, where the information flow between the agents is governed by the communication graph topology. A novel online adaptive learning (policy iteration) solution for the graphical game is given in terms of the solution to a set of coupled graphical game Hamiltonian and Bellman equations. The policy iteration solution is developed to learn Nash solution for the dynamic graphical game online in real-time. Policy iteration convergence proof for the dynamic graphical game is given under mild condition about the graph interconnectivity properties. Critic neural network structures are used to implement the online policy iteration solution. 
Only partial knowledge of the dynamics is required and the tuning is done in a distributed fashion in terms of the local information available to each agent.\",\"PeriodicalId\":149580,\"journal\":{\"name\":\"2016 13th International Multi-Conference on Systems, Signals & Devices (SSD)\",\"volume\":\"58 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-03-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 13th International Multi-Conference on Systems, Signals & Devices (SSD)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SSD.2016.7473693\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 13th International Multi-Conference on Systems, Signals & Devices (SSD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSD.2016.7473693","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Online policy iteration solution for dynamic graphical games
The dynamic graphical game is a special class of the standard dynamic game that explicitly captures the structure of a communication graph, where the information flow between the agents is governed by the graph topology. A novel online adaptive learning (policy iteration) solution for the graphical game is given in terms of the solution to a set of coupled graphical-game Hamiltonian and Bellman equations. The policy iteration solution learns the Nash solution of the dynamic graphical game online, in real time. A convergence proof for policy iteration on the dynamic graphical game is given under mild conditions on the graph interconnectivity properties. Critic neural network structures are used to implement the online policy iteration solution. Only partial knowledge of the dynamics is required, and the tuning is done in a distributed fashion using only the local information available to each agent.
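To make the policy iteration structure concrete, the following is a minimal sketch of the evaluation/improvement loop each agent would run, assuming scalar discrete-time linear dynamics and a quadratic local cost. The dynamics, weights, graph, and variable names are invented placeholders, not the formulation used in the paper, and the coupling through the neighbors' states and policies that defines the graphical game is omitted for brevity, so the per-agent updates below are decoupled.

```python
# Illustrative sketch only: NOT the authors' algorithm. All quantities below
# (a, b, Q, R, gamma, E) are assumed placeholders for a scalar discrete-time
# linear-quadratic setting; the graphical-game coupling through neighbors'
# errors and policies is omitted, so each agent updates independently here.
import numpy as np

# Communication graph adjacency (agent i receives information only from j with
# E[i, j] = 1). Shown only to indicate where neighbor information would enter;
# unused in this decoupled sketch.
E = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
n_agents = E.shape[0]

a, b = 0.9, 1.0      # assumed local dynamics: x_i(k+1) = a*x_i(k) + b*u_i(k)
Q, R = 1.0, 1.0      # assumed weights on the local error and the control effort
gamma = 0.9          # discount factor

# Each agent keeps a scalar critic weight P_i (value ~ P_i * e_i^2, with e_i its
# local error) and a feedback gain K_i (policy u_i = -K_i * e_i), starting from
# the admissible initial policy K_i = 0.
P = np.ones(n_agents)
K = np.zeros(n_agents)

for _ in range(50):
    for i in range(n_agents):
        closed = a - b * K[i]   # closed-loop coefficient under the current policy
        # Policy evaluation: solve the scalar Bellman equation
        #   P_i = Q + R*K_i^2 + gamma*(a - b*K_i)^2 * P_i
        P[i] = (Q + R * K[i] ** 2) / (1.0 - gamma * closed ** 2)
        # Policy improvement: greedy gain with respect to the updated critic
        K[i] = gamma * b * P[i] * a / (R + gamma * b ** 2 * P[i])

print("critic weights P:", P)
print("feedback gains K:", K)
```

In the full graphical game, each agent would evaluate its value function on its local neighborhood tracking error and improve its policy using only the states and policies of its graph neighbors, which is what makes the tuning distributed and dependent only on locally available information.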