A Two-Level Neural-RL-Based Approach for Hierarchical Multiplayer Systems Under Mismatched Uncertainties

Xiangnan Zhong;Zhen Ni
{"title":"A Two-Level Neural-RL-Based Approach for Hierarchical Multiplayer Systems Under Mismatched Uncertainties","authors":"Xiangnan Zhong;Zhen Ni","doi":"10.1109/TAI.2024.3493833","DOIUrl":null,"url":null,"abstract":"AI and reinforcement learning (RL) have attracted great attention in the study of multiplayer systems over the past decade. Despite the advances, most of the studies are focused on synchronized decision-making to attain Nash equilibrium, where all the players take actions simultaneously. On the other hand, however, in complex applications, certain players may have an advantage in making sequential decisions and this situation introduces a hierarchical structure and influences how other players respond. The control design for such system is challenging since it relies on solving the coupled Hamilton–Jacobi equation. The situation becomes more difficult when the learning process is exposed to complex uncertainties with unreliable data being exchanged. Therefore, in this article, we develop a new learning-based control approach for a class of nonlinear hierarchical multiplayer systems subject to mismatched uncertainties. Specifically, we first formulate this new problem as a multiplayer Stackelberg–Nash game in conjunction with the hierarchical robust–optimal transformation. Theoretical analysis confirms the equivalence of this transformation and ensures that the designed control policies can achieve stable equilibrium. Then, a two-level neural-RL-based approach is developed to automatically and adaptively learn the solutions. The stability of this online learning process is also provided. 
Finally, two numerical examples are presented to demonstrate the effectiveness of the developed learning-based robust control design.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 3","pages":"759-772"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10747770/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

AI and reinforcement learning (RL) have attracted great attention in the study of multiplayer systems over the past decade. Despite these advances, most studies focus on synchronized decision-making to attain a Nash equilibrium, where all players take actions simultaneously. In complex applications, however, certain players may have an advantage in making sequential decisions; this introduces a hierarchical structure and influences how the other players respond. The control design for such a system is challenging because it relies on solving the coupled Hamilton–Jacobi equations. The situation becomes more difficult when the learning process is exposed to complex uncertainties and unreliable data exchange. Therefore, in this article, we develop a new learning-based control approach for a class of nonlinear hierarchical multiplayer systems subject to mismatched uncertainties. Specifically, we first formulate this new problem as a multiplayer Stackelberg–Nash game in conjunction with a hierarchical robust–optimal transformation. Theoretical analysis confirms the equivalence of this transformation and ensures that the designed control policies achieve a stable equilibrium. A two-level neural-RL-based approach is then developed to learn the solutions automatically and adaptively, and the stability of this online learning process is established. Finally, two numerical examples demonstrate the effectiveness of the developed learning-based robust control design.
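The two-level structure the abstract describes rests on the Stackelberg solution concept: the leader commits to an action while anticipating the followers' optimal reaction. The following is a minimal numerical sketch of that concept on a scalar quadratic game; it is not the paper's neural-RL algorithm — the costs `J_L`, `J_F` and the plain gradient iterations are illustrative assumptions chosen so the solution can be checked by hand.

```python
# Bilevel (Stackelberg) sketch on a scalar quadratic game.
# Leader cost:   J_L(u, v) = (u - 1)^2 + u*v
# Follower cost: J_F(u, v) = (v - 1)^2 - u*v
# Follower best response: v*(u) = 1 + u/2, so the leader's reduced cost
# is (3/2)u^2 - u + 1, minimized at u* = 1/3 with v* = 7/6.

def J_L(u, v):
    """Leader's cost (illustrative assumption)."""
    return (u - 1.0) ** 2 + u * v

def follower_best_response(u, lr=0.1, steps=500):
    """Inner level: the follower reacts optimally to the leader's action
    via gradient descent on J_F."""
    v = 0.0
    for _ in range(steps):
        grad = 2.0 * (v - 1.0) - u  # dJ_F/dv
        v -= lr * grad
    return v

def leader_update(u, lr=0.05, eps=1e-4):
    """Outer level: descend the reduced cost J_L(u, v*(u)), anticipating
    the follower's reaction (finite-difference gradient)."""
    jp = J_L(u + eps, follower_best_response(u + eps))
    jm = J_L(u - eps, follower_best_response(u - eps))
    return u - lr * (jp - jm) / (2.0 * eps)

u = 0.0
for _ in range(300):
    u = leader_update(u)
v = follower_best_response(u)
print(u, v)  # converges to the Stackelberg point u* = 1/3, v* = 7/6
```

Note that the simultaneous-play Nash equilibrium of this same game is (2/5, 6/5), not (1/3, 7/6): moving first and anticipating the follower changes the outcome, which is exactly the hierarchy the paper's Stackelberg–Nash formulation captures.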