A distributed deep reinforcement learning-based longitudinal control strategy for connected automated vehicles combining attention mechanism
Chunyu Liu, Zihao Sheng, Pei Li, Sikai Chen, Xia Luo, Bin Ran
Transportation Letters: The International Journal of Transportation Research, Vol. 17, No. 2, pp. 183-199 (published 2025-02-07)
DOI: 10.1080/19427867.2024.2335084
URL: https://www.sciencedirect.com/org/science/article/pii/S1942786724000195
Citations: 0
Abstract
With the rapid development of connected automated vehicles (CAVs), the trajectory control of CAVs has become a focus in traffic engineering. This paper proposes a distributed deep reinforcement learning-based longitudinal control strategy for CAVs combined with an attention mechanism, which enhances the stability of mixed traffic, car-following efficiency, energy efficiency, and safety. A longitudinal control strategy is built using a deep reinforcement learning model: during training, the CAVs gradually learn an optimal car-following strategy that improves safety, stability, fuel economy, mobility, and driving comfort. To further capture the interactions among vehicles in each platoon, a graph attention network is introduced to support the car-following control strategy. To verify the effectiveness of the proposed method, a comparative analysis is conducted, which indicates that the proposed method can dramatically dampen oscillations, enhance traffic efficiency, reduce fuel consumption, and improve driving safety under different scenarios.
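The abstract pairs two ingredients: a graph attention network that encodes interactions among the vehicles in a platoon, and a deep reinforcement learning policy that issues longitudinal (acceleration) commands. As a rough illustration only, the sketch below shows how a single-head graph attention layer could aggregate neighbor states and feed a policy head that outputs a bounded acceleration per CAV. The class names, feature dimensions, single-head design, tanh-bounded actor, and fully connected adjacency are all assumptions; the abstract does not specify the authors' architecture or RL algorithm.

```python
# Minimal sketch (not the authors' implementation) of a GAT-encoded state
# feeding a longitudinal-control policy. All dimensions and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """Single-head graph attention over vehicles in a platoon (assumed design)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)  # attention scoring

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (N, in_dim)  per-vehicle features (e.g., gap, speed, acceleration)
        # adj: (N, N)       1 where vehicle j is observable from vehicle i (e.g., via V2V)
        h = self.W(x)                                    # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1), h.unsqueeze(0).expand(n, n, -1)], dim=-1
        )                                                # (N, N, 2*out_dim)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))      # raw attention scores (N, N)
        e = e.masked_fill(adj == 0, float("-inf"))       # attend only to connected vehicles
        alpha = torch.softmax(e, dim=-1)                 # attention weights per ego vehicle
        return F.elu(alpha @ h)                          # aggregated neighbor context (N, out_dim)


class LongitudinalActor(nn.Module):
    """Maps each vehicle's attention-weighted context to a bounded acceleration command."""

    def __init__(self, feat_dim: int, hidden: int = 64, max_accel: float = 3.0):
        super().__init__()
        self.gat = GraphAttentionLayer(feat_dim, hidden)
        self.policy = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Tanh()
        )
        self.max_accel = max_accel

    def forward(self, states: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        context = self.gat(states, adj)                  # interaction-aware encoding
        return self.max_accel * self.policy(context)     # (N, 1) accelerations in [-a_max, a_max]


# Toy usage: a 4-vehicle platoon, each vehicle described by (gap, speed, acceleration).
states = torch.tensor([[30.0, 25.0, 0.0],
                       [28.0, 24.5, 0.1],
                       [27.0, 24.0, -0.2],
                       [29.0, 24.8, 0.0]])
adj = torch.ones(4, 4)          # fully connected platoon (illustrative assumption)
actor = LongitudinalActor(feat_dim=3)
print(actor(states, adj))       # one acceleration command per CAV
```

In a distributed DRL training loop, such an actor would be optimized against a reward that combines the objectives listed in the abstract (safety, stability, fuel economy, mobility, comfort); the specific algorithm and reward weighting are not given in the abstract and are left open here.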
About the journal:
Transportation Letters: The International Journal of Transportation Research is a quarterly journal that publishes high-quality peer-reviewed and mini-review papers as well as technical notes and book reviews on the state-of-the-art in transportation research.
The focus of Transportation Letters is on analytical and empirical findings, methodological papers, and theoretical and conceptual insights across all areas of research. Review resource papers that merge descriptions of the state-of-the-art with innovative and new methodological, theoretical, and conceptual insights spanning all areas of transportation research are invited and of particular interest.