Lateral flow control of connected vehicles through deep reinforcement learning

Abdul Rahman Kreidieh, Y. Farid, K. Oguchi
{"title":"基于深度强化学习的网联车辆横向流控制","authors":"Abdul Rahman Kreidieh, Y. Farid, K. Oguchi","doi":"10.1109/IV55152.2023.10186790","DOIUrl":null,"url":null,"abstract":"Coordinated lane-assignment strategies offer promising solutions for improving traffic conditions. By anticipating and re-positioning connected vehicles in response to potential downstream events, such systems can greatly improve the safety and efficiency of existing networks. Assigning said decisions, however, grows exponentially more complex as the scale of target networks expands. In this paper, we explore solutions to optimal lane assignment at the macroscopic level of traffic, whereby decisions are aggregated across multiple vehicles clustered spatially into sections. This approach reduces some of the challenges around scalability, but introduces dynamical interactions at the microscopic level that render higher-level decision-making complexities. To this point, we provide results demonstrating that reinforcement learning (RL) strategies are capable of generating responses that efficiently coordinate the lateral flow of vehicles across multiple road sections. In particular, we find that RL methods can robustly identify and maneuver vehicles around bottlenecks placed randomly within a given network, and in doing so substantively reduce the the traveling time for both human-driven and connected vehicles.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"283 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Lateral flow control of connected vehicles through deep reinforcement learning\",\"authors\":\"Abdul Rahman Kreidieh, Y. Farid, K. Oguchi\",\"doi\":\"10.1109/IV55152.2023.10186790\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Coordinated lane-assignment strategies offer promising solutions for improving traffic conditions. By anticipating and re-positioning connected vehicles in response to potential downstream events, such systems can greatly improve the safety and efficiency of existing networks. Assigning said decisions, however, grows exponentially more complex as the scale of target networks expands. In this paper, we explore solutions to optimal lane assignment at the macroscopic level of traffic, whereby decisions are aggregated across multiple vehicles clustered spatially into sections. This approach reduces some of the challenges around scalability, but introduces dynamical interactions at the microscopic level that render higher-level decision-making complexities. To this point, we provide results demonstrating that reinforcement learning (RL) strategies are capable of generating responses that efficiently coordinate the lateral flow of vehicles across multiple road sections. 
In particular, we find that RL methods can robustly identify and maneuver vehicles around bottlenecks placed randomly within a given network, and in doing so substantively reduce the the traveling time for both human-driven and connected vehicles.\",\"PeriodicalId\":195148,\"journal\":{\"name\":\"2023 IEEE Intelligent Vehicles Symposium (IV)\",\"volume\":\"283 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE Intelligent Vehicles Symposium (IV)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IV55152.2023.10186790\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Intelligent Vehicles Symposium (IV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IV55152.2023.10186790","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Coordinated lane-assignment strategies offer promising solutions for improving traffic conditions. By anticipating and re-positioning connected vehicles in response to potential downstream events, such systems can greatly improve the safety and efficiency of existing networks. Assigning such decisions, however, grows exponentially more complex as the scale of the target network expands. In this paper, we explore solutions to optimal lane assignment at the macroscopic level of traffic, whereby decisions are aggregated across multiple vehicles clustered spatially into sections. This approach reduces some of the challenges around scalability, but introduces dynamical interactions at the microscopic level that complicate higher-level decision-making. To this point, we provide results demonstrating that reinforcement learning (RL) strategies are capable of generating responses that efficiently coordinate the lateral flow of vehicles across multiple road sections. In particular, we find that RL methods can robustly identify and maneuver vehicles around bottlenecks placed randomly within a given network, and in doing so substantially reduce the travel time for both human-driven and connected vehicles.
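
To make the macroscopic formulation concrete, below is a minimal, hypothetical sketch of what a section-level lane-assignment environment could look like. It is not the authors' implementation: the class name `SectionLaneEnv`, the relaxation dynamics, the 30% connected-vehicle penetration, and the density-based reward are all illustrative assumptions chosen to mirror the abstract's setup (sections with lanes, a randomly placed bottleneck, and a travel-time-like objective).

```python
import numpy as np

# Hypothetical, minimal sketch of a macroscopic lane-assignment environment.
# Names, dynamics, and reward shaping are illustrative assumptions, not the
# paper's implementation. The road is split into S sections with L lanes; the
# agent chooses, per section, a target lane distribution for connected
# vehicles, and the reward penalizes congestion (a proxy for travel time).

class SectionLaneEnv:
    def __init__(self, n_sections=10, n_lanes=3, seed=0):
        self.S, self.L = n_sections, n_lanes
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Per-section, per-lane vehicle densities (veh/km), plus a random
        # bottleneck that halves the capacity of one lane in one section.
        self.density = self.rng.uniform(10.0, 30.0, size=(self.S, self.L))
        self.capacity = np.full((self.S, self.L), 60.0)
        s, l = self.rng.integers(self.S), self.rng.integers(self.L)
        self.capacity[s, l] *= 0.5
        return self._obs()

    def _obs(self):
        # Observation: normalized densities and capacities for every section.
        return np.concatenate([self.density.ravel() / 60.0,
                               self.capacity.ravel() / 60.0])

    def step(self, action):
        # Action: per-section logits over lanes; a softmax gives the fraction
        # of connected vehicles directed to each lane within that section.
        logits = np.asarray(action, dtype=float).reshape(self.S, self.L)
        target = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

        # Lateral relaxation: a share of each section's vehicles shifts toward
        # the commanded lane distribution (assumed ~30% connected penetration).
        per_section = self.density.sum(axis=1, keepdims=True)
        self.density = 0.7 * self.density + 0.3 * per_section * target

        # Longitudinal advection: flow out of each section is capped by lane
        # capacity; whatever leaves a section enters the next one downstream.
        outflow = np.minimum(self.density, self.capacity) * 0.2
        self.density -= outflow
        self.density[1:] += outflow[:-1]

        # Reward: negative mean density-to-capacity ratio, a crude stand-in
        # for the travel-time objective described in the abstract.
        reward = -float((self.density / self.capacity).mean())
        return self._obs(), reward, False, {}
```

Because the interface follows the familiar reset/step pattern, a standard policy-gradient learner (e.g., PPO from an off-the-shelf RL library) could be trained against it; whether such a toy model reproduces the paper's results is, of course, not claimed here.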