{"title":"Safe Multiagent Reinforcement Learning With Bilevel Optimization in Autonomous Driving","authors":"Zhi Zheng;Shangding Gu","doi":"10.1109/TAI.2024.3497919","DOIUrl":null,"url":null,"abstract":"Ensuring safety in multiagent reinforcement learning (MARL), particularly when deploying it in real-world applications such as autonomous driving, emerges as a critical challenge. To address this challenge, traditional safe MARL methods extend MARL approaches to incorporate safety considerations, aiming to minimize safety risk values. However, these safe MARL algorithms often fail to model other agents and lack convergence guarantees, particularly in dynamically complex environments. In this study, we propose a safe MARL method grounded in a Stackelberg model with bilevel optimization, for which convergence analysis is provided. Derived from our theoretical analysis, we develop two practical algorithms, namely constrained Stackelberg Q-learning (CSQ) and constrained Stackelberg multiagent deep deterministic policy gradient (CS-MADDPG), designed to facilitate MARL decision-making in some simulated autonomous driving applications such as traffic management. To evaluate the effectiveness of our algorithms, we developed a safe MARL autonomous driving benchmark and conducted experiments on challenging autonomous driving scenarios, such as merges, roundabouts, intersections, and racetracks. 
The experimental results indicate that our algorithms, CSQ and CS-MADDPG, outperform several strong MARL baselines, such as Bi-AC, MACPO, and MAPPO-L, regarding reward and safety performance.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 4","pages":"829-842"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10752922/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Ensuring safety in multiagent reinforcement learning (MARL), particularly when deploying it in real-world applications such as autonomous driving, is a critical challenge. To address this challenge, traditional safe MARL methods extend MARL approaches to incorporate safety considerations, aiming to minimize safety risk values. However, these algorithms often fail to model other agents and lack convergence guarantees, particularly in dynamic, complex environments. In this study, we propose a safe MARL method grounded in a Stackelberg model with bilevel optimization, for which convergence analysis is provided. Derived from our theoretical analysis, we develop two practical algorithms, constrained Stackelberg Q-learning (CSQ) and constrained Stackelberg multiagent deep deterministic policy gradient (CS-MADDPG), designed to facilitate MARL decision-making in simulated autonomous driving applications such as traffic management. To evaluate the effectiveness of our algorithms, we developed a safe MARL autonomous driving benchmark and conducted experiments on challenging driving scenarios, including merges, roundabouts, intersections, and racetracks. The experimental results indicate that our algorithms, CSQ and CS-MADDPG, outperform several strong MARL baselines, including Bi-AC, MACPO, and MAPPO-L, in both reward and safety performance.
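To make the bilevel (leader-follower) idea concrete, the following is a minimal, hypothetical sketch of a constrained Stackelberg Q-learning step on a single-state bimatrix game. It is not the paper's CSQ algorithm: the payoff matrices, the hard cost filter on the leader's actions, and all function names are illustrative assumptions. The leader anticipates the follower's best response and maximizes its reward Q-value only over actions whose predicted safety cost stays within a budget.

```python
import random

# Illustrative payoff and cost matrices: rows = leader actions, cols = follower actions.
R_LEADER = [[3.0, 1.0], [4.0, 5.0]]    # leader rewards
R_FOLLOWER = [[2.0, 0.0], [1.0, 3.0]]  # follower rewards
COST = [[0.2, 0.1], [0.9, 0.8]]        # safety cost incurred by the leader
COST_LIMIT = 0.5                       # leader's safety budget d

def follower_best_response(q_follower, a):
    """Follower best-responds to the leader's announced action a."""
    row = q_follower[a]
    return max(range(len(row)), key=lambda b: row[b])

def leader_stackelberg_action(q_leader, q_cost, q_follower, limit):
    """Leader maximizes reward over actions whose anticipated cost
    (given the follower's best response) stays within the budget."""
    candidates = []
    for a in range(len(q_leader)):
        b = follower_best_response(q_follower, a)
        if q_cost[a][b] <= limit:
            candidates.append((q_leader[a][b], a))
    if not candidates:  # no safe action: fall back to the least-cost one
        return min(range(len(q_leader)),
                   key=lambda a: q_cost[a][follower_best_response(q_follower, a)])
    return max(candidates)[1]

def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    n_a, n_b = len(R_LEADER), len(R_LEADER[0])
    q_l = [[0.0] * n_b for _ in range(n_a)]  # leader reward Q
    q_f = [[0.0] * n_b for _ in range(n_a)]  # follower reward Q
    q_c = [[0.0] * n_b for _ in range(n_a)]  # leader cost Q
    for _ in range(episodes):
        # Epsilon-greedy action selection for both players.
        if rng.random() < eps:
            a = rng.randrange(n_a)
        else:
            a = leader_stackelberg_action(q_l, q_c, q_f, COST_LIMIT)
        b = (rng.randrange(n_b) if rng.random() < eps
             else follower_best_response(q_f, a))
        # One-step (bandit-style) temporal-difference updates.
        q_l[a][b] += alpha * (R_LEADER[a][b] - q_l[a][b])
        q_f[a][b] += alpha * (R_FOLLOWER[a][b] - q_f[a][b])
        q_c[a][b] += alpha * (COST[a][b] - q_c[a][b])
    return q_l, q_f, q_c

q_l, q_f, q_c = train()
a = leader_stackelberg_action(q_l, q_c, q_f, COST_LIMIT)
b = follower_best_response(q_f, a)
```

In this toy game the unconstrained Stackelberg leader would pick the second action (reward 5.0 via the follower's best response), but its learned cost estimate exceeds the budget, so the constrained leader settles on the safe first action. The paper's CSQ and CS-MADDPG handle the multi-state, function-approximation versions of this bilevel problem with the convergence analysis summarized in the abstract.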