{"title":"用于 CAV 的混合动力汽车跟车控制:整合线性反馈和深度强化学习以稳定混合交通","authors":"Ximin Yue , Haotian Shi , Yang Zhou , Zihao Li","doi":"10.1016/j.trc.2024.104773","DOIUrl":null,"url":null,"abstract":"<div><p>This paper introduces a novel hybrid car-following strategy for connected automated vehicles (CAVs) to mitigate traffic oscillations while simultaneously improving CAV car-following (CF) distance-maintaining efficiencies. To achieve this, our proposed control framework integrates two controllers: a linear feedback controller and a deep reinforcement learning controller. Firstly, a cutting-edge linear feedback controller is developed by non-linear programming to maximally dampen traffic oscillations in the frequency domain while ensuring both local and string stability. Based on that, deep reinforcement learning (DRL) is employed to complement the linear feedback controller further to handle the unknown traffic disturbance quasi-optimally in the time domain. This unique approach enhances the control stability of the traditional DRL approach and provides an innovative perspective on CF control. Simulation experiments were conducted to validate the efficacy of our control strategy. 
The results demonstrate superior performance in terms of training convergence, driving comfort, and dampening oscillations compared to existing DRL-based controllers.</p></div>","PeriodicalId":54417,"journal":{"name":"Transportation Research Part C-Emerging Technologies","volume":null,"pages":null},"PeriodicalIF":7.6000,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hybrid car following control for CAVs: Integrating linear feedback and deep reinforcement learning to stabilize mixed traffic\",\"authors\":\"Ximin Yue , Haotian Shi , Yang Zhou , Zihao Li\",\"doi\":\"10.1016/j.trc.2024.104773\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>This paper introduces a novel hybrid car-following strategy for connected automated vehicles (CAVs) to mitigate traffic oscillations while simultaneously improving CAV car-following (CF) distance-maintaining efficiencies. To achieve this, our proposed control framework integrates two controllers: a linear feedback controller and a deep reinforcement learning controller. Firstly, a cutting-edge linear feedback controller is developed by non-linear programming to maximally dampen traffic oscillations in the frequency domain while ensuring both local and string stability. Based on that, deep reinforcement learning (DRL) is employed to complement the linear feedback controller further to handle the unknown traffic disturbance quasi-optimally in the time domain. This unique approach enhances the control stability of the traditional DRL approach and provides an innovative perspective on CF control. Simulation experiments were conducted to validate the efficacy of our control strategy. 
The results demonstrate superior performance in terms of training convergence, driving comfort, and dampening oscillations compared to existing DRL-based controllers.</p></div>\",\"PeriodicalId\":54417,\"journal\":{\"name\":\"Transportation Research Part C-Emerging Technologies\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.6000,\"publicationDate\":\"2024-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transportation Research Part C-Emerging Technologies\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0968090X24002948\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"TRANSPORTATION SCIENCE & TECHNOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transportation Research Part C-Emerging Technologies","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0968090X24002948","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TRANSPORTATION SCIENCE & TECHNOLOGY","Score":null,"Total":0}
Hybrid car following control for CAVs: Integrating linear feedback and deep reinforcement learning to stabilize mixed traffic
This paper introduces a novel hybrid car-following strategy for connected automated vehicles (CAVs) to mitigate traffic oscillations while simultaneously improving CAV car-following (CF) distance-maintaining efficiencies. To achieve this, our proposed control framework integrates two controllers: a linear feedback controller and a deep reinforcement learning controller. Firstly, a cutting-edge linear feedback controller is developed by non-linear programming to maximally dampen traffic oscillations in the frequency domain while ensuring both local and string stability. Based on that, deep reinforcement learning (DRL) is employed to complement the linear feedback controller further to handle the unknown traffic disturbance quasi-optimally in the time domain. This unique approach enhances the control stability of the traditional DRL approach and provides an innovative perspective on CF control. Simulation experiments were conducted to validate the efficacy of our control strategy. The results demonstrate superior performance in terms of training convergence, driving comfort, and dampening oscillations compared to existing DRL-based controllers.
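The two-layer architecture described in the abstract, a linear feedback law that handles nominal spacing control with a DRL term added as a corrective residual, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's controller: the constant-time-gap spacing policy, the gain values, the disturbance profile, and the zeroed-out DRL residual are all placeholders chosen for demonstration.

```python
import numpy as np

def linear_feedback_accel(gap, dv, v, k_s=0.45, k_v=0.8, s0=2.0, T=1.5):
    """Linear feedback CF law tracking a constant time-gap spacing policy.

    gap : actual spacing to the leader (m)
    dv  : leader speed minus follower speed (m/s)
    v   : follower speed (m/s)
    Gains k_s, k_v and policy parameters s0, T are illustrative,
    not the paper's frequency-domain-optimized values.
    """
    desired_gap = s0 + T * v  # constant time-gap spacing target
    return k_s * (gap - desired_gap) + k_v * dv

def simulate(steps=600, dt=0.1):
    """Leader performs a brief brake-and-recover disturbance; the follower
    applies the linear feedback law. The DRL correction is a placeholder
    set to zero here (in the paper it would be a learned policy output)."""
    xl, vl = 30.0, 15.0  # leader position (m), speed (m/s)
    xf, vf = 0.0, 15.0   # follower position, speed
    gaps = []
    for k in range(steps):
        # leader disturbance: brake for 3 s, then recover
        al = -2.0 if 50 <= k < 80 else (1.0 if 80 <= k < 110 else 0.0)
        residual = 0.0  # hypothetical DRL residual term
        a = linear_feedback_accel(xl - xf, vl - vf, vf) + residual
        a = np.clip(a, -3.0, 2.0)  # comfort/actuator limits
        vl = max(0.0, vl + al * dt); xl += vl * dt
        vf = max(0.0, vf + a * dt);  xf += vf * dt
        gaps.append(xl - xf)
    return gaps

gaps = simulate()
```

With these (assumed) gains the follower absorbs the leader's speed dip without collision and the gap settles back toward the time-gap target, which is the qualitative behavior the linear layer is responsible for; the DRL residual would then be trained to further shape the time-domain response to unmodeled disturbances.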
Journal introduction:
Transportation Research: Part C (TR_C) is dedicated to showcasing high-quality, scholarly research that delves into the development, applications, and implications of transportation systems and emerging technologies. Our focus lies not solely on individual technologies, but rather on their broader implications for the planning, design, operation, control, maintenance, and rehabilitation of transportation systems, services, and components. In essence, the intellectual core of the journal revolves around the transportation aspect rather than the technology itself. We actively encourage the integration of quantitative methods from diverse fields such as operations research, control systems, complex networks, computer science, and artificial intelligence. Join us in exploring the intersection of transportation systems and emerging technologies to drive innovation and progress in the field.