Title: A Two-Level Neural-RL-Based Approach for Hierarchical Multiplayer Systems Under Mismatched Uncertainties
Authors: Xiangnan Zhong; Zhen Ni
Journal: IEEE Transactions on Artificial Intelligence, vol. 6, no. 3, pp. 759–772
Publication date: 2024-11-08
DOI: 10.1109/TAI.2024.3493833
URL: https://ieeexplore.ieee.org/document/10747770/
Citations: 0
Abstract
AI and reinforcement learning (RL) have attracted great attention in the study of multiplayer systems over the past decade. Despite these advances, most studies focus on synchronized decision-making to attain a Nash equilibrium, where all players take actions simultaneously. In complex applications, however, certain players may have an advantage in making sequential decisions; this introduces a hierarchical structure and influences how the other players respond. The control design for such systems is challenging because it relies on solving coupled Hamilton–Jacobi equations. The situation becomes more difficult when the learning process is exposed to complex uncertainties and unreliable data are exchanged. Therefore, in this article, we develop a new learning-based control approach for a class of nonlinear hierarchical multiplayer systems subject to mismatched uncertainties. Specifically, we first formulate this new problem as a multiplayer Stackelberg–Nash game in conjunction with a hierarchical robust–optimal transformation. Theoretical analysis confirms the equivalence of this transformation and ensures that the designed control policies achieve a stable equilibrium. Then, a two-level neural-RL-based approach is developed to automatically and adaptively learn the solutions. The stability of this online learning process is also established. Finally, two numerical examples are presented to demonstrate the effectiveness of the developed learning-based robust control design.
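To give a flavor of the hierarchical (leader-first, follower-responds) structure the abstract describes, the sketch below solves a toy Stackelberg game by iterating the leader's gradient against the follower's best response. This is a hypothetical illustration with hand-picked quadratic costs, not the paper's neural-RL algorithm or its robust–optimal transformation; the function names and cost functions are assumptions for the example only.

```python
# Toy Stackelberg game (illustrative only, not the paper's method):
# the leader commits to action a; the follower observes a and best-responds.

def follower_best_response(a):
    # Follower minimizes J_f(a, b) = (b - a)^2, so b*(a) = a.
    return a

def leader_cost_grad(a):
    # Leader minimizes J_l(a, b*(a)) = (a - 1)^2 + b*(a)^2 with b*(a) = a,
    # giving the gradient d/da = 2(a - 1) + 2a = 4a - 2.
    return 4.0 * a - 2.0

def solve_stackelberg(lr=0.1, steps=200):
    # Gradient descent on the leader's cost, anticipating the follower's reply.
    a = 0.0
    for _ in range(steps):
        a -= lr * leader_cost_grad(a)
    return a, follower_best_response(a)

a_star, b_star = solve_stackelberg()
# Converges to the Stackelberg solution a* = b* = 0.5.
```

The key feature, mirroring the paper's setting, is the asymmetry: the leader's update accounts for how the follower will react, rather than both players updating simultaneously as in a synchronized Nash scheme.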