Safe Multiagent Reinforcement Learning With Bilevel Optimization in Autonomous Driving

Zhi Zheng and Shangding Gu
{"title":"Safe Multiagent Reinforcement Learning With Bilevel Optimization in Autonomous Driving","authors":"Zhi Zheng;Shangding Gu","doi":"10.1109/TAI.2024.3497919","DOIUrl":null,"url":null,"abstract":"Ensuring safety in multiagent reinforcement learning (MARL), particularly when deploying it in real-world applications such as autonomous driving, emerges as a critical challenge. To address this challenge, traditional safe MARL methods extend MARL approaches to incorporate safety considerations, aiming to minimize safety risk values. However, these safe MARL algorithms often fail to model other agents and lack convergence guarantees, particularly in dynamically complex environments. In this study, we propose a safe MARL method grounded in a Stackelberg model with bilevel optimization, for which convergence analysis is provided. Derived from our theoretical analysis, we develop two practical algorithms, namely constrained Stackelberg Q-learning (CSQ) and constrained Stackelberg multiagent deep deterministic policy gradient (CS-MADDPG), designed to facilitate MARL decision-making in some simulated autonomous driving applications such as traffic management. To evaluate the effectiveness of our algorithms, we developed a safe MARL autonomous driving benchmark and conducted experiments on challenging autonomous driving scenarios, such as merges, roundabouts, intersections, and racetracks. The experimental results indicate that our algorithms, CSQ and CS-MADDPG, outperform several strong MARL baselines, such as Bi-AC, MACPO, and MAPPO-L, regarding reward and safety performance.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 4","pages":"829-842"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10752922/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Ensuring safety in multiagent reinforcement learning (MARL), particularly when deploying it in real-world applications such as autonomous driving, is a critical challenge. To address this challenge, traditional safe MARL methods extend MARL approaches to incorporate safety considerations, aiming to minimize safety risk values. However, these algorithms often fail to model other agents and lack convergence guarantees, particularly in dynamically complex environments. In this study, we propose a safe MARL method grounded in a Stackelberg model with bilevel optimization, for which a convergence analysis is provided. From this theoretical analysis, we derive two practical algorithms, constrained Stackelberg Q-learning (CSQ) and constrained Stackelberg multiagent deep deterministic policy gradient (CS-MADDPG), designed to facilitate MARL decision-making in simulated autonomous driving applications such as traffic management. To evaluate the effectiveness of our algorithms, we developed a safe MARL autonomous driving benchmark and conducted experiments on challenging autonomous driving scenarios, including merges, roundabouts, intersections, and racetracks. The experimental results indicate that our algorithms, CSQ and CS-MADDPG, outperform several strong MARL baselines, such as Bi-AC, MACPO, and MAPPO-L, in terms of both reward and safety performance.
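
To make the leader-follower structure concrete, the following is a minimal tabular sketch of a constrained Stackelberg Q-learning update in the spirit of CSQ. It is an illustration under stated assumptions, not the paper's exact algorithm: the shared-reward (cooperative) simplification, the hard cost threshold `COST_LIMIT`, and the helper `stackelberg_equilibrium` are all hypothetical choices made for brevity. The leader chooses its action while anticipating the follower's cost-feasible best response, and separate reward and cost critics are updated by temporal difference.

```python
import numpy as np

# Illustrative sketch only: a tabular, shared-reward simplification of a
# constrained Stackelberg Q-learning step. Sizes, names, and the hard
# cost-threshold handling are assumptions, not the paper's formulation.
N_STATES, N_A1, N_A2 = 10, 4, 4          # toy sizes: states, leader/follower actions
ALPHA, GAMMA, COST_LIMIT = 0.1, 0.95, 1.0

Q_r = np.zeros((N_STATES, N_A1, N_A2))   # joint reward Q-values
Q_c = np.zeros((N_STATES, N_A1, N_A2))   # joint cost (safety-risk) Q-values

def stackelberg_equilibrium(s):
    """Leader picks the action maximizing reward, anticipating that the
    follower best-responds among actions whose estimated cost is feasible."""
    best, best_val = (0, 0), -np.inf
    for a1 in range(N_A1):
        # Follower's cost-feasible best response to a fixed leader action a1.
        feasible = [a2 for a2 in range(N_A2) if Q_c[s, a1, a2] <= COST_LIMIT]
        candidates = feasible if feasible else list(range(N_A2))  # fallback
        a2 = max(candidates, key=lambda a: Q_r[s, a1, a])
        if Q_r[s, a1, a2] > best_val:
            best_val, best = Q_r[s, a1, a2], (a1, a2)
    return best

def csq_update(s, a1, a2, r, c, s_next):
    """One TD step for both critics, bootstrapping from the Stackelberg
    equilibrium actions at the next state."""
    a1_n, a2_n = stackelberg_equilibrium(s_next)
    Q_r[s, a1, a2] += ALPHA * (r + GAMMA * Q_r[s_next, a1_n, a2_n] - Q_r[s, a1, a2])
    Q_c[s, a1, a2] += ALPHA * (c + GAMMA * Q_c[s_next, a1_n, a2_n] - Q_c[s, a1, a2])
```

In the full algorithms, the tabular critics would be replaced by function approximators (as the deep deterministic policy gradients in CS-MADDPG imply), and safe-RL methods typically enforce the cost constraint through mechanisms such as Lagrangian multipliers rather than the hard feasibility filter used in this sketch.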