Dynamic Option Policy Enabled Hierarchical Deep Reinforcement Learning Model for Autonomous Overtaking Maneuver

IF 7.9 · CAS Tier 1 (Engineering & Technology) · JCR Q1 (ENGINEERING, CIVIL)
Shikhar Singh Lodhi;Neetesh Kumar;Pradumn Kumar Pandey
DOI: 10.1109/TITS.2025.3536020
Journal: IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 4, pp. 5018-5029
Published: 2025-02-14
Citations: 0

Abstract

Dynamic Option Policy Enabled Hierarchical Deep Reinforcement Learning Model for Autonomous Overtaking Maneuver
Driving an Autonomous Vehicle (AV) in dynamic traffic is a critical task, with the overtaking maneuver considered one of the most complex because it involves several sub-maneuvers. Recent advances in Deep Reinforcement Learning (DRL) have enabled AVs to exhibit exceptional performance on overtaking-related challenges. However, the intricate nature of overtaking makes it difficult for an RL agent to proficiently handle all of its sub-maneuvers, which include left lane change, right lane change, and straight drive. Furthermore, dynamic traffic restricts RL agents from executing the sub-maneuvers at the critical checkpoints involved in overtaking. To address this, we propose an approach inspired by semi-Markov options, called Dynamic Option Policy enabled Hierarchical Deep Reinforcement Learning (DOP-HDRL). This approach selects among sub-maneuver agents using a single dynamic option policy, while employing individual DRL agents specifically trained for each sub-maneuver to perform tasks during overtaking in dynamic environments. By decomposing overtaking into several sub-maneuvers and controlling them with a single policy, the DOP-HDRL approach reduces training time and computational load compared to classical DRL agents. Moreover, DOP-HDRL easily integrates basic traffic safety rules into overtaking maneuvers to offer more robust solutions. The approach is rigorously evaluated in the CARLA simulator across multiple overtaking and non-overtaking scenarios inspired by the National Highway Traffic Safety Administration (NHTSA) pre-crash scenarios. On average, compared to state-of-the-art methods, DOP-HDRL shows a 100% completion rate, a 14% lower collision rate, a 25% more optimal clearance distance, and a 7% higher average speed.
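The options-style hierarchy the abstract describes can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: the sub-maneuver DRL agents are replaced by stub environment dynamics, and all names (`Obs`, `option_policy`, `option_terminated`, the gap and lane thresholds) are hypothetical illustrations of the idea that a single top-level option policy picks which sub-maneuver runs, and each option persists until its subgoal terminates, semi-Markov style.

```python
from dataclasses import dataclass

# Toy observation of the ego vehicle relative to the vehicle it overtakes.
@dataclass
class Obs:
    lane: int            # 0 = driving lane, 1 = overtaking lane
    gap_ahead: float     # distance to the vehicle ahead, in metres
    passed_lead: bool    # True once the ego is ahead of the overtaken vehicle

LEFT_LANE_CHANGE, STRAIGHT_DRIVE, RIGHT_LANE_CHANGE = "left", "straight", "right"

def option_policy(obs: Obs) -> str:
    """Top-level dynamic option policy: picks which sub-maneuver agent runs next."""
    if obs.lane == 0 and obs.gap_ahead < 20.0 and not obs.passed_lead:
        return LEFT_LANE_CHANGE          # pull out to start the overtake
    if obs.lane == 1 and obs.passed_lead:
        return RIGHT_LANE_CHANGE         # merge back after passing
    return STRAIGHT_DRIVE                # otherwise keep the current lane

def option_terminated(option: str, obs: Obs) -> bool:
    """Semi-Markov termination: an option ends only when its subgoal is reached."""
    if option == LEFT_LANE_CHANGE:
        return obs.lane == 1
    if option == RIGHT_LANE_CHANGE:
        return obs.lane == 0
    return True  # straight drive is re-evaluated every step

def overtake_episode() -> list[str]:
    """Run one toy overtaking episode; return the sequence of options chosen."""
    obs = Obs(lane=0, gap_ahead=15.0, passed_lead=False)
    trace = []
    for _ in range(6):
        option = option_policy(obs)
        trace.append(option)
        # Extremely simplified stand-in for the trained sub-maneuver agents.
        if option == LEFT_LANE_CHANGE:
            obs.lane = 1
        elif option == RIGHT_LANE_CHANGE:
            obs.lane = 0
        else:
            obs.gap_ahead += 10.0
            if obs.lane == 1:
                obs.passed_lead = True
        if obs.lane == 0 and obs.passed_lead:
            break  # overtake complete: back in the driving lane, lead passed
    return trace
```

Running the episode yields the three-phase sequence the abstract names (left lane change, straight drive past the lead vehicle, right lane change back), which is the decomposition the single option policy is meant to orchestrate.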
Source Journal
IEEE Transactions on Intelligent Transportation Systems (Engineering: Electrical & Electronic)
CiteScore: 14.80
Self-citation rate: 12.90%
Articles per year: 1872
Review time: 7.5 months
Journal description: The journal covers the theoretical, experimental, and operational aspects of electrical and electronics engineering and information technologies as applied to Intelligent Transportation Systems (ITS). Intelligent Transportation Systems are defined as those systems utilizing synergistic technologies and systems engineering concepts to develop and improve transportation systems of all kinds. The scope of this interdisciplinary activity includes the promotion, consolidation, and coordination of ITS technical activities among IEEE entities, and providing a focus for cooperative activities, both internally and externally.