A transformer-based approach for traffic prediction with fusion spatiotemporal attention
Wenfeng Zhou, Guojiang Shen, Zhenzhen Zhao, Zhaolin Deng, Tao Tang, Xiangjie Kong, Amr Tolba, Osama Alfarraj
Knowledge-Based Systems, Volume 329, Article 114466 (published 2025-09-11)
DOI: 10.1016/j.knosys.2025.114466
URL: https://www.sciencedirect.com/science/article/pii/S0950705125015059
Citations: 0
Abstract
Accurate traffic data prediction is a crucial technology for data-driven intelligent transportation systems, with a significant impact on urban traffic management, travel efficiency, and the overall travel experience. Traffic flow prediction primarily involves mining dynamic spatiotemporal dependencies, yet most existing Transformer-based and GNN-based methods have limitations in capturing local-global spatiotemporal dependencies. To address this issue, we propose a novel traffic data prediction model called LGSTformer that can perceive local-global spatiotemporal dependencies. First, we construct an embedding layer that provides multiple types of embedding representations for the model by projecting the spatiotemporal data, together with temporal and spatial information, into different embeddings. Next, building on the naive spatiotemporal self-attention mechanism, we design two modules to capture local-global temporal and spatial dependencies: the local-global temporal module, which incorporates multi-scale temporal convolutions to capture short-term temporal dependencies, and the local-global spatial module, which incorporates dynamic-static graph convolutions to capture local spatial dependencies. Finally, to fuse local and global dependency information effectively, a dual-path adaptive gated fusion layer based on a gating mechanism is introduced to adaptively combine information at different levels. Experimental results on four public real-world traffic datasets show that LGSTformer outperforms existing methods and has potential as an advanced solution for traffic flow prediction.
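The abstract does not give the exact formulation of the dual-path adaptive gated fusion layer, but the general gating pattern it describes is well established: a sigmoid gate, computed from both paths, weighs the local-path and global-path features element-wise. The following is a minimal hypothetical sketch of that pattern in NumPy; the function name, weight matrices, and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(local_feat, global_feat, W_l, W_g, b):
    """Sketch of a dual-path gated fusion step (hypothetical).

    A gate z in (0, 1), computed from both feature paths, blends the
    local and global representations element-wise:
        out = z * local + (1 - z) * global
    """
    z = sigmoid(local_feat @ W_l + global_feat @ W_g + b)
    return z * local_feat + (1.0 - z) * global_feat

# Toy usage: 4 nodes, 8-dimensional features per path.
rng = np.random.default_rng(0)
local_feat = rng.standard_normal((4, 8))
global_feat = rng.standard_normal((4, 8))
W_l = rng.standard_normal((8, 8)) * 0.1
W_g = rng.standard_normal((8, 8)) * 0.1
b = np.zeros(8)

fused = gated_fusion(local_feat, global_feat, W_l, W_g, b)
```

Because the gate is bounded in (0, 1), each output element is a convex combination of the corresponding local and global feature values, so the fusion can adaptively lean toward either path per element.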
Journal introduction:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems built on knowledge-based and other artificial-intelligence techniques. The journal aims to support human prediction and decision-making through data science and computation techniques, provide balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.