Rongliang Zhou, Jiakun Huang, Mingjun Li, Hepeng Li, Haotian Cao, Xiaolin Song
Title: Knowledge transfer from simple to complex: A safe and efficient reinforcement learning framework for autonomous driving decision-making
DOI: 10.1016/j.aei.2025.103188
Journal: Advanced Engineering Informatics, Vol. 65, Article 103188 (Impact Factor 8.0, JCR Q1, Computer Science, Artificial Intelligence)
Publication date: 2025-02-20 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1474034625000813
Citations: 0
Abstract
A safe and efficient decision-making system is crucial for autonomous vehicles. However, the complexity of driving environments often limits the effectiveness of many rule-based and machine learning approaches. Reinforcement learning (RL), with its robust self-learning capabilities and adaptability to diverse environments, offers a promising solution. Despite this, concerns about safety and efficiency during the training phase have hindered its widespread adoption. To address these challenges, we propose a novel RL framework, Simple to Complex Collaborative Decision (S2CD), based on the Teacher–Student Framework (TSF), to facilitate safe and efficient knowledge transfer. In this approach, the teacher model is first trained rapidly in a lightweight simulation environment. During the training of the student model in more complex environments, the teacher evaluates the student's selected actions to prevent suboptimal behavior. To further enhance performance, we introduce an RL algorithm called Adaptive Clipping Proximal Policy Optimization Plus (ACPPO+), which combines samples from both teacher and student policies while applying dynamic clipping strategies based on sample importance. This improves sample efficiency and mitigates data imbalance. Additionally, Kullback–Leibler (KL) divergence is employed as a policy constraint to accelerate the student's learning. A gradual weaning strategy then enables the student to explore independently, overcoming the limitations of the teacher. Finally, to provide model interpretability, the Layer-wise Relevance Propagation (LRP) technique is applied. Simulation experiments conducted in highway lane-change scenarios demonstrate that S2CD significantly enhances training efficiency and safety while reducing training costs. Even when guided by suboptimal teachers, the student consistently outperforms expectations, showcasing the robustness and effectiveness of the S2CD framework.
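The mechanics the abstract describes — a teacher that overrides suboptimal student actions, a gradual weaning schedule that phases that supervision out, and a PPO-style clipped objective with a KL penalty toward the teacher policy — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the linear weaning schedule, the `margin` threshold, and all function names (`weaning_prob`, `teacher_gate`, `clipped_surrogate`) are assumptions for illustration, and ACPPO+'s per-sample adaptive clipping is reduced here to a single `eps` parameter.

```python
import numpy as np

def weaning_prob(step, total_steps, p0=1.0):
    """Linearly decaying probability that the teacher is allowed to
    intervene (a simple stand-in for the 'gradual weaning' strategy)."""
    return max(0.0, p0 * (1.0 - step / total_steps))

def teacher_gate(student_action, teacher_q, step, total_steps, rng, margin=0.1):
    """Teacher evaluates the student's chosen discrete action: if its
    value estimate trails the teacher's best action by more than
    `margin`, override it (subject to the current weaning probability).
    Returns (action_taken, intervened)."""
    best = int(np.argmax(teacher_q))
    if (rng.random() < weaning_prob(step, total_steps)
            and teacher_q[best] - teacher_q[student_action] > margin):
        return best, True
    return student_action, False

def clipped_surrogate(ratio, advantage, eps, kl, beta=0.01):
    """PPO-style clipped surrogate with a KL penalty toward the teacher
    policy; making `eps` depend on sample importance would approximate
    the 'dynamic clipping' idea."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped) - beta * kl
```

In this sketch the teacher's influence on exploration decays to zero by `total_steps`, after which the student acts entirely on its own policy, mirroring the weaning idea described above.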
Journal Introduction:
Advanced Engineering Informatics is an international journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific base for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitative and quantitative. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus, and INSPEC.