CASAformer: Congestion-aware sparse attention transformer for traffic speed prediction

IF 12.5 Q1 TRANSPORTATION
Yifan Zhang , Qishen Zhou , Jianping Wang , Anastasios Kouvelas , Michail A. Makridis
Journal: Communications in Transportation Research, Volume 5, Article 100174
DOI: 10.1016/j.commtr.2025.100174
Publication date: 2025-04-10 (Journal Article)
Citations: 0

Abstract

Accurate and efficient traffic speed prediction is crucial for improving road safety and efficiency. With the emergence of deep learning and the availability of extensive traffic data, data-driven methods with increasingly complicated structures and progressively deeper neural networks are widely adopted for this task. Regardless of their specific designs, these models aim to optimize the overall average performance without distinguishing between different traffic states. However, predicting traffic speed under congestion is normally more important than under free flow, since downstream tasks such as traffic control and optimization are more concerned with congestion than with free flow. Unfortunately, most state-of-the-art (SOTA) models do not differentiate between traffic states during training and evaluation. To this end, we first comprehensively study the performance of SOTA models under different speed regimes to illustrate their low accuracy for low-speed prediction. We then propose a novel Congestion-Aware Sparse Attention transformer (CASAformer) to enhance prediction performance under low-speed traffic conditions. Specifically, the CASA layer emphasizes congestion data and reduces the impact of free-flow data. Moreover, we adopt a new congestion-adaptive loss function for training so that the model learns more from congestion data. Extensive experiments on real-world datasets show that CASAformer outperforms SOTA models for predicting speeds under 40 mph across all prediction horizons.
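The abstract describes its two mechanisms only at a high level: a CASA layer that reduces the impact of free-flow data in attention, and a congestion-adaptive loss that up-weights congestion samples. The paper's exact formulations are not reproduced here; the following is a minimal NumPy sketch of both ideas under stated assumptions. The 40 mph threshold follows the paper's evaluation setting, but the function names, the hard binary free-flow mask, and the congestion weight of 3.0 are illustrative choices, not the authors' method.

```python
import numpy as np

def _softmax(x):
    """Numerically stable softmax along the last axis."""
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def casa_sparse_attention(q, k, v, key_speeds, threshold=40.0):
    """Sketch of congestion-aware sparse attention: keys observed in
    free flow (speed >= threshold) are masked out before the softmax,
    so attention mass concentrates on congested positions.
    Falls back to dense attention if no key is congested."""
    scores = (q @ k.T) / np.sqrt(q.shape[-1])
    congested = np.asarray(key_speeds) < threshold
    if congested.any():
        scores = np.where(congested[None, :], scores, -1e9)
    attn = _softmax(scores)
    return attn @ v, attn

def congestion_adaptive_loss(pred, target, threshold=40.0, congestion_weight=3.0):
    """Sketch of a congestion-adaptive loss: a weighted MSE in which
    samples observed below the congestion threshold contribute
    congestion_weight times as much as free-flow samples."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    w = np.where(target < threshold, congestion_weight, 1.0)
    return float(np.mean(w * (pred - target) ** 2))
```

For example, with key speeds of 30 and 70 mph, the attention sketch assigns essentially all weight to the congested first key; with predictions [30, 60] against targets [20, 60], the weighted loss is 3·100/2 = 150 rather than the plain MSE of 50.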
Source journal: Communications in Transportation Research
CiteScore: 15.20
Self-citation rate: 0.00%