Combining Deep Reinforcement Learning with Rule-based Constraints for Safe Highway Driving

Authors: Tingting Liu, Qianqian Liu, Hanxiao Liu, Xiaoqiang Ren
Published in: 2022 China Automation Congress (CAC)
Publication date: 2022-11-25
DOI: 10.1109/CAC57257.2022.10055747 (https://doi.org/10.1109/CAC57257.2022.10055747)
Citations: 0
Abstract
Deep reinforcement learning (DRL) has been employed to solve challenging decision-making problems in autonomous driving. Safe decision-making in autonomous highway driving remains among the foremost open problems due to rapidly evolving driving environments and the influence of surrounding road users. In this paper, we present a safety framework that leverages the merits of both rule-based constraints and DRL for safety assurance. We model the highway scenario as a Markov Decision Process (MDP) and apply the deep Q-network (DQN) algorithm to optimize driving performance. Moreover, a multi-head attention mechanism is introduced so that the ego vehicle attends to the surrounding vehicles it interacts with most strongly, which enhances its safety in complex highway driving environments. We also implement a safety module based on common traffic practices to ensure a minimum relative distance between two vehicles. This safety module serves as feedback on the actions of the DRL agent: if an action leads to risk, it is replaced by a safer one and a negative reward is assigned. We test and evaluate our approach in a three-lane highway driving scenario. The experimental results indicate that the proposed framework reduces the collision rate and accelerates the learning process.
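The rule-based safety module described above can be pictured as an action filter between the DRL policy and the environment. The following is a minimal sketch of that idea, assuming a simple constant-acceleration gap prediction and an illustrative 10 m minimum relative distance; the action names, threshold, dynamics model, and penalty value are all hypothetical and not taken from the paper.

```python
# Illustrative sketch of a rule-based safety filter for a DRL agent.
# The agent proposes an action; if the predicted gap to the lead vehicle
# would fall below a minimum relative distance, the action is replaced
# by a safer fallback and a negative reward is assigned.
# All names, thresholds, and dynamics here are assumptions for illustration.

ACTIONS = {"accelerate": 2.0, "keep": 0.0, "brake": -2.0}  # accel in m/s^2
MIN_GAP = 10.0        # assumed minimum relative distance (m)
DT = 1.0              # assumed decision interval (s)
UNSAFE_PENALTY = -1.0 # assumed negative reward for a risky proposal


def predicted_gap(gap: float, rel_speed: float, accel: float, dt: float = DT) -> float:
    """Predict the gap to the lead vehicle after one decision step.

    rel_speed is ego speed minus lead speed (positive means closing in),
    using a simple constant-acceleration model for the ego vehicle.
    """
    closing_speed = rel_speed + accel * dt
    return gap - closing_speed * dt


def safety_filter(action: str, gap: float, rel_speed: float):
    """Return (possibly replaced action, safety penalty) for the agent."""
    if predicted_gap(gap, rel_speed, ACTIONS[action]) >= MIN_GAP:
        return action, 0.0  # proposed action keeps a safe gap; no penalty
    # Otherwise override with the strongest deceleration and penalize,
    # so the agent learns to avoid proposing risky actions.
    return "brake", UNSAFE_PENALTY
```

In training, the penalty returned by the filter would be added to the environment reward, which is how the module "serves as feedback" while still guaranteeing the executed action respects the distance rule.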