Adaptive Traffic Signal’s Safety and Efficiency Improvement by Multi-Objective Deep Reinforcement Learning Approach

Shahin Mirbakhsh, Mahdi Azizi
{"title":"Adaptive Traffic Signal’s Safety and Efficiency Improvement by Multi-Objective Deep Reinforcement Learning Approach","authors":"Shahin Mirbakhsh, Mahdi Azizi","doi":"10.58806/ijirme.2024.v3i7n10","DOIUrl":null,"url":null,"abstract":"This research introduces an innovative method for adaptive traffic signal control (ATSC) through the utilization of multi-objective deep reinforcement learning (DRL) techniques. The proposed approach aims to enhance control strategies at intersections while simultaneously addressing safety, efficiency, and decarbonization objectives. Traditional ATSC methods typically prioritize traffic efficiency and often struggle to adapt to real-time dynamic traffic conditions. To address these challenges, the study suggests a DRL-based ATSC algorithm that incorporates the Dueling Double Deep Q Network (D3QN) framework. The performance of this algorithm is assessed using a simulated intersection in Changsha, China. Notably, the proposed ATSC algorithm surpasses both traditional ATSC and ATSC algorithms focused solely on efficiency optimization by achieving over a 16% reduction in traffic conflicts and a 4% decrease in carbon emissions. Regarding traffic efficiency, waiting time is reduced by 18% compared to traditional ATSC, albeit showing a slight increase (0.64%) compared to the DRL-based ATSC algorithm integrating the D3QN framework. This marginal increase suggests a trade-off between efficiency and other objectives like safety and decarbonization. Additionally, the proposed approach demonstrates superior performance, particularly in scenarios with high traffic demand, across all three objectives. These findings contribute to advancing traffic control systems by offering a practical and effective solution for optimizing signal control strategies in real-world traffic situations.","PeriodicalId":183155,"journal":{"name":"International Journal of Innovative Research in Multidisciplinary Education","volume":"25 13","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Innovative Research in Multidisciplinary Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.58806/ijirme.2024.v3i7n10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This research introduces an innovative method for adaptive traffic signal control (ATSC) through the utilization of multi-objective deep reinforcement learning (DRL) techniques. The proposed approach aims to enhance control strategies at intersections while simultaneously addressing safety, efficiency, and decarbonization objectives. Traditional ATSC methods typically prioritize traffic efficiency and often struggle to adapt to real-time dynamic traffic conditions. To address these challenges, the study suggests a DRL-based ATSC algorithm that incorporates the Dueling Double Deep Q Network (D3QN) framework. The performance of this algorithm is assessed using a simulated intersection in Changsha, China. Notably, the proposed ATSC algorithm surpasses both traditional ATSC and ATSC algorithms focused solely on efficiency optimization by achieving over a 16% reduction in traffic conflicts and a 4% decrease in carbon emissions. Regarding traffic efficiency, waiting time is reduced by 18% compared to traditional ATSC, albeit showing a slight increase (0.64%) compared to the DRL-based ATSC algorithm integrating the D3QN framework. This marginal increase suggests a trade-off between efficiency and other objectives like safety and decarbonization. Additionally, the proposed approach demonstrates superior performance, particularly in scenarios with high traffic demand, across all three objectives. These findings contribute to advancing traffic control systems by offering a practical and effective solution for optimizing signal control strategies in real-world traffic situations.
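The abstract names the Dueling Double Deep Q Network (D3QN) framework and a multi-objective reward spanning efficiency, safety, and decarbonization. The paper itself does not publish code here, so the following is only a minimal illustrative sketch of those two ingredients, assuming PyTorch, a discrete phase-selection action space, and placeholder names (`DuelingQNet`, `double_q_target`, `multi_objective_reward`) and weights that are not taken from the authors' implementation.

```python
# Hedged sketch of a D3QN agent with a scalarized multi-objective reward.
# All names, dimensions, and weights are illustrative assumptions.
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling architecture: separate state-value and advantage streams."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.backbone(state)
        v = self.value(h)
        a = self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)


def double_q_target(online: DuelingQNet, target: DuelingQNet,
                    reward: torch.Tensor, next_state: torch.Tensor,
                    done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN target: select the next action with the online net,
    evaluate it with the target net to curb Q-value overestimation."""
    with torch.no_grad():
        next_actions = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, next_actions).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q


def multi_objective_reward(wait_time_saved: float, conflicts_avoided: float,
                           co2_saved: float, w_eff: float = 1.0,
                           w_safe: float = 1.0, w_carbon: float = 1.0) -> float:
    """Scalarized reward combining efficiency, safety, and decarbonization
    terms; the weights are placeholders, not values from the paper."""
    return (w_eff * wait_time_saved
            + w_safe * conflicts_avoided
            + w_carbon * co2_saved)
```

In this kind of setup, trading the reward weights off against one another is what produces the efficiency-versus-safety/decarbonization trade-off the abstract reports (an 18% waiting-time reduction over traditional ATSC at the cost of a 0.64% increase relative to the efficiency-only DRL baseline).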