MDEformer: A spatio-temporal decoupling transformer with the multidimensional information encoding for certain traffic flow prediction

IF 6.8 | CAS Zone 2 (Engineering & Technology) | Q1 ENGINEERING, MULTIDISCIPLINARY
Yudong Lu, Tao Cui, Di Dong, Chongguang Ren, Zhijian Qu, Xianwei Zhang
{"title":"MDEformer: A spatio-temporal decoupling transformer with the multidimensional information encoding for certain traffic flow prediction","authors":"Yudong Lu ,&nbsp;Tao Cui ,&nbsp;Di Dong ,&nbsp;Chongguang Ren ,&nbsp;Zhijian Qu ,&nbsp;Xianwei Zhang","doi":"10.1016/j.aej.2025.09.037","DOIUrl":null,"url":null,"abstract":"<div><div>With the rapid advancement of Intelligent Transportation Systems (ITS), accurate traffic flow prediction has become a critical challenge. Although existing deep learning models are capable of capturing the spatio-temporal dependencies in traffic data to some extent, they still face limitations in modeling spatio-temporal features, long-term and short-term temporal dependencies, and the dynamic-static spatial relationships at local and global scales. This paper proposes a Spatio-Temporal Decoupling Transformer with Multidimensional Information Encoding (MDEformer) to address these issues. MDEformer effectively captures spatio-temporal features through the integration of multidimensional information encoding. Moreover, the model adopts a decoupled design of temporal and spatial encoder layers. In the temporal encoder layer, a GRU is employed to replace the linear mapping in the multi-head self-attention mechanism, thereby facilitating improved capture of short-term fluctuations and long-term trends. In the spatial encoder layer, we integrate the graph fusion module with Chebyshev graph convolution to replace the conventional mapping in the multi-head self-attention mechanism, thereby enhancing the capability to model local and global dynamic-static spatial dependencies. Experiments on four real-world traffic datasets demonstrate that MDEformer significantly outperforms other baseline methods in terms of prediction accuracy.</div></div>","PeriodicalId":7484,"journal":{"name":"alexandria engineering journal","volume":"130 ","pages":"Pages 403-419"},"PeriodicalIF":6.8000,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"alexandria engineering journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1110016825010002","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

With the rapid advancement of Intelligent Transportation Systems (ITS), accurate traffic flow prediction has become a critical challenge. Although existing deep learning models are capable of capturing the spatio-temporal dependencies in traffic data to some extent, they still face limitations in modeling spatio-temporal features, long-term and short-term temporal dependencies, and the dynamic-static spatial relationships at local and global scales. This paper proposes a Spatio-Temporal Decoupling Transformer with Multidimensional Information Encoding (MDEformer) to address these issues. MDEformer effectively captures spatio-temporal features through the integration of multidimensional information encoding. Moreover, the model adopts a decoupled design of temporal and spatial encoder layers. In the temporal encoder layer, a GRU is employed to replace the linear mapping in the multi-head self-attention mechanism, thereby facilitating improved capture of short-term fluctuations and long-term trends. In the spatial encoder layer, we integrate the graph fusion module with Chebyshev graph convolution to replace the conventional mapping in the multi-head self-attention mechanism, thereby enhancing the capability to model local and global dynamic-static spatial dependencies. Experiments on four real-world traffic datasets demonstrate that MDEformer significantly outperforms other baseline methods in terms of prediction accuracy.
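To make the two attention substitutions described in the abstract concrete, the sketch below gives one plausible reading of the temporal encoder layer: the linear Q/K/V projections of multi-head self-attention are replaced by GRUs run along the time axis, so each projection already carries sequential context covering short-term fluctuations and longer trends. All class names, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRUSelfAttention(nn.Module):
    """Multi-head self-attention whose Q/K/V projections are GRUs
    instead of linear layers (hypothetical sketch of the idea in the
    temporal encoder layer; shapes and names are assumptions)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.d_model, self.n_heads = d_model, n_heads
        self.d_head = d_model // n_heads
        # One GRU per projection in place of nn.Linear(d_model, d_model).
        self.gru_q = nn.GRU(d_model, d_model, batch_first=True)
        self.gru_k = nn.GRU(d_model, d_model, batch_first=True)
        self.gru_v = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -- one node's embedded time series.
        B, T, _ = x.shape
        q, _ = self.gru_q(x)                       # (B, T, d_model)
        k, _ = self.gru_k(x)
        v, _ = self.gru_v(x)
        # Split into heads: (B, n_heads, T, d_head).
        def split(t):
            return t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, self.d_model)
        return self.out(out)

# Usage sketch: 12 past time steps, 64-dim hidden features.
layer = GRUSelfAttention(d_model=64, n_heads=4)
h = layer(torch.randn(32, 12, 64))                 # -> (32, 12, 64)
```

Analogously, the spatial encoder layer is said to replace the conventional attention mapping with a graph fusion module plus Chebyshev graph convolution. Below is a minimal order-K Chebyshev graph convolution under assumed shapes; the paper's graph fusion module and the fused adjacency it produces are not reproduced here.

```python
class ChebGraphConv(nn.Module):
    """Order-K Chebyshev graph convolution (sketch; K and shapes assumed)."""

    def __init__(self, in_dim: int, out_dim: int, K: int):
        super().__init__()
        self.K = K
        self.theta = nn.Parameter(torch.randn(K, in_dim, out_dim) * 0.01)

    def forward(self, x: torch.Tensor, L_tilde: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, in_dim); L_tilde: scaled Laplacian (N, N).
        Tx_0 = x
        out = Tx_0 @ self.theta[0]
        if self.K > 1:
            Tx_1 = L_tilde @ x
            out = out + Tx_1 @ self.theta[1]
            for k in range(2, self.K):
                # Chebyshev recurrence: T_k = 2 * L_tilde * T_{k-1} - T_{k-2}.
                Tx_2 = 2 * (L_tilde @ Tx_1) - Tx_0
                out = out + Tx_2 @ self.theta[k]
                Tx_0, Tx_1 = Tx_1, Tx_2
        return out

N = 207                                    # e.g. number of sensors (illustrative)
conv = ChebGraphConv(in_dim=64, out_dim=64, K=3)
L_tilde = torch.eye(N)                     # placeholder scaled Laplacian
y = conv(torch.randn(32, N, 64), L_tilde)  # -> (32, 207, 64)
```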
Source Journal

Alexandria Engineering Journal
Category: Engineering - General Engineering
CiteScore: 11.20
Self-citation rate: 4.40%
Articles per year: 1015
Review time: 43 days

Journal description: Alexandria Engineering Journal is an international journal devoted to publishing high quality papers in the field of engineering and applied science. Alexandria Engineering Journal is cited in the Engineering Information Services (EIS) and the Chemical Abstracts (CA). The papers published in Alexandria Engineering Journal are grouped into five sections, according to the following classification:
• Mechanical, Production, Marine and Textile Engineering
• Electrical Engineering, Computer Science and Nuclear Engineering
• Civil and Architecture Engineering
• Chemical Engineering and Applied Sciences
• Environmental Engineering