A knowledge-informed deep learning paradigm for generalizable and stability-optimized car-following models

Communications in Transportation Research · IF 14.5 · Q1 (Transportation)
Chengming Wang, Dongyao Jia, Wei Wang, Dong Ngoduy, Bei Peng, Jianping Wang
Communications in Transportation Research, Volume 5, Article 100211
DOI: 10.1016/j.commtr.2025.100211
Published: 2025-09-20 · Journal Article · Citations: 0
Full text: https://www.sciencedirect.com/science/article/pii/S2772424725000514

Abstract

Car-following models (CFMs) are fundamental to traffic flow analysis and autonomous driving. Although calibrated physics-based and trained data-driven CFMs can replicate human driving behavior, their reliance on specific datasets limits generalization across diverse scenarios and reduces reliability in real-world deployment. In addition to behavioral fidelity, ensuring traffic stability is increasingly critical for the safe and efficient operation of autonomous vehicles (AVs), requiring CFMs that jointly address both objectives. However, existing models generally do not support a systematic integration of these goals. To bridge this gap, we propose a knowledge-informed deep learning (KIDL) paradigm that distills the generalization capabilities of pre-trained large language models (LLMs) into a lightweight and stability-aware neural architecture. LLMs are used to extract fundamental car-following knowledge beyond dataset-specific patterns, and this knowledge is transferred to a reliable, tractable, and computationally efficient model through knowledge distillation. KIDL also incorporates stability constraints directly into its training objective, ensuring that the resulting model not only emulates human-like behavior but also satisfies the local and string stability requirements essential for real-world AV deployment. We evaluate KIDL on the real-world NGSIM and HighD datasets, comparing its performance with representative physics-based, data-driven, and hybrid CFMs. Both empirical and theoretical results consistently demonstrate KIDL’s superior behavioral generalization and traffic flow stability, offering a robust and scalable solution for next-generation traffic systems.
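The local and string stability requirements mentioned in the abstract can be made concrete with standard linear-stability analysis of a car-following model a = f(s, Δv, v). The sketch below is illustrative only, not the paper's KIDL formulation: the Intelligent Driver Model (IDM) is used as a stand-in, the parameter values are textbook defaults, and the criteria are the standard Wilson-Ward conditions evaluated at a car-following equilibrium.

```python
import math

# Hedged sketch: check local and string stability of a generic car-following
# model a = f(s, dv, v) at equilibrium. IDM, its parameter values, and the
# Wilson-Ward criteria are illustrative assumptions, not the paper's method.

def idm(s, dv, v, v0=33.3, T=1.6, a_max=0.73, b=1.67, s0=2.0):
    """IDM acceleration. s: gap [m]; dv = v_leader - v_follower [m/s]; v: speed [m/s]."""
    s_star = s0 + v * T - v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)

def equilibrium_gap(v, v0=33.3, T=1.6, s0=2.0):
    """Gap at which idm(s, 0, v) == 0, i.e. steady-state following."""
    return (s0 + v * T) / math.sqrt(1.0 - (v / v0) ** 4)

def stability(f, s_eq, v_e, h=1e-4):
    """Central-difference partials of f at equilibrium, plus the two criteria."""
    f_s  = (f(s_eq + h, 0.0, v_e) - f(s_eq - h, 0.0, v_e)) / (2 * h)
    f_dv = (f(s_eq, h, v_e) - f(s_eq, -h, v_e)) / (2 * h)
    f_v  = (f(s_eq, 0.0, v_e + h) - f(s_eq, 0.0, v_e - h)) / (2 * h)
    local_stable  = f_s > 0 and (f_v - f_dv) < 0            # damped response to one leader
    string_stable = 0.5 * f_v ** 2 - f_dv * f_v - f_s >= 0  # Wilson-Ward platoon criterion
    return f_s, f_dv, f_v, local_stable, string_stable

v_e = 15.0
s_eq = equilibrium_gap(v_e)
f_s, f_dv, f_v, local_ok, string_ok = stability(idm, s_eq, v_e)
print(f"s_eq={s_eq:.2f} m, f_s={f_s:.4f}, f_dv={f_dv:.4f}, f_v={f_v:.4f}")
print("locally stable:", local_ok, "| string stable:", string_ok)
```

With these textbook IDM parameters the string-stability criterion comes out negative at 15 m/s, so the platoon amplifies perturbations even though each vehicle is locally stable. Penalizing such violations inside the training objective, as the abstract describes, is what would steer a learned model toward string-stable behavior.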
Source journal CiteScore: 15.20 · Self-citation rate: 0.00%