Weiran Guo; Guanjun Liu; Ziyuan Zhou; Jiacun Wang; Ying Tang; Miaomiao Wang

Title: Robust Training in Multiagent Deep Reinforcement Learning Against Optimal Adversary
DOI: 10.1109/TSMC.2025.3561276
Journal: IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 7, pp. 4957-4968
Publication date: 2025-04-25
URL: https://ieeexplore.ieee.org/document/10977657/
Citation count: 0
Abstract
Industry 5.0 enhances manufacturing capability through efficient human-machine interaction, combining human workers and robots to complete tasks more accurately and effectively. Artificial intelligence (AI) plays an essential role in Industry 5.0. As a branch of AI, multiagent deep reinforcement learning (MADRL) has attracted considerable attention in both academia and industry. However, there is a gap between virtual and physical environments in how clean an observed state is, and state adversarial attacks can seriously degrade the performance of MADRL. Hence, improving the robustness of MADRL algorithms is an important research topic. In this article, we propose an optimal policy-based state adversarial attack method that makes MADRL algorithms more robust when it is applied during agent training. Two case studies related to Industry 5.0 and one general case study are presented, in which robust training against the optimal adversarial attack is tested. The MADRL algorithms involved in the experiments include the centralized training and decentralized execution (CTDE) framework and shared experience actor-critic (SEAC), demonstrating the generality of our method.
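The abstract does not specify how the state adversary is constructed, so the following is only a minimal illustrative sketch of the general idea of a state adversarial attack during training: the agent's observation is perturbed within a small budget so that its currently preferred action becomes less likely. The one-step sign perturbation, the linear-softmax policy, and all names here are assumptions for illustration, not the paper's optimal adversary.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def perturb_state(W, state, eps):
    """Hypothetical one-step state adversary for a linear-softmax policy
    pi(a|s) = softmax(W @ s): move the observed state against the
    gradient of log pi(a*|s), where a* is the currently preferred
    action, staying inside an l-infinity ball of radius eps."""
    probs = softmax(W @ state)
    a = int(np.argmax(probs))
    # Gradient of log pi(a|s) w.r.t. s for a softmax-linear policy:
    # grad = W[a] - sum_b pi(b|s) * W[b]
    grad = W[a] - probs @ W
    # Step against the gradient, clipped to the budget by the sign step.
    return state - eps * np.sign(grad)
```

During robust training, the agent would act on `perturb_state(W, s, eps)` instead of the clean state `s`, so the learned policy is optimized against the perturbed observations rather than the clean ones.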
Journal overview:
The IEEE Transactions on Systems, Man, and Cybernetics: Systems encompasses the fields of systems engineering, covering issue formulation, analysis, and modeling throughout the systems engineering lifecycle phases. It addresses decision-making, issue interpretation, systems management, processes, and various methods such as optimization, modeling, and simulation in the development and deployment of large systems.