Multi-modal mamba framework for RGB-T crowd counting with linear complexity
Chuan-Lin Gan, Rui-Sheng Jia, Hong-Mei Sun, Yuan-Chao Song
Pattern Recognition, Volume 172, Article 112522. Published 2025-10-05. DOI: 10.1016/j.patcog.2025.112522
JCR Q1 (Computer Science, Artificial Intelligence), Impact Factor 7.6
URL: https://www.sciencedirect.com/science/article/pii/S0031320325011859
Citations: 0
Abstract
Existing RGB-T crowd counting methods enhance counting accuracy by integrating RGB images with thermal imaging features. However, attention-based fusion methods have a computational complexity of O(N²), which significantly increases computational costs. Moreover, current approaches fail to sufficiently retain the detailed information of the original modalities during feature fusion, leading to the loss of critical information. To address these issues, this paper proposes a cross-modal fusion network based on Mamba, named VMMNet. Specifically, a Dynamic State Space (DSS) block is designed using the selective scan mechanism, reducing the computational complexity of attention mechanisms from O(N²) to linear, thereby significantly improving network efficiency and inference speed. Furthermore, to tackle the issue of information loss during multimodal feature fusion, two innovative modules, the Cross-Mamba Enhancement Block (CMEB) and the Merge-Mamba Fusion Block (MMFB), are introduced. The CMEB enhances inter-modal information interaction through a cross-selective scan mechanism, while the MMFB further integrates the features output by CMEB to ensure information integrity. Finally, a Channel Aware Mamba Decoder (CMD) is designed to enhance the network's modeling capability in the channel dimension. On existing RGB-T crowd counting datasets, VMMNet reduces FLOPs by 94.3% compared to state-of-the-art methods and achieves performance improvements of 18.7% and 23.3% in GAME(0) and RMSE, respectively.
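The abstract reports gains in GAME(0) and RMSE. GAME(L), the Grid Average Mean absolute Error, is the standard evaluation metric in RGB-T crowd counting: each density map is split into 4^L non-overlapping cells and the absolute count errors per cell are summed, so GAME(0) reduces to the plain absolute count error over the whole image. A minimal sketch (the function name `game` and the toy density maps are our own, not from the paper):

```python
import numpy as np

def game(pred, gt, L):
    """Grid Average Mean absolute Error at level L.

    Splits predicted and ground-truth density maps into a 2**L x 2**L
    grid and sums the absolute difference of the counts in each cell.
    GAME(0) equals the absolute error of the total count.
    """
    h, w = gt.shape
    k = 2 ** L
    err = 0.0
    for i in range(k):
        for j in range(k):
            ps = pred[i * h // k:(i + 1) * h // k,
                      j * w // k:(j + 1) * w // k].sum()
            gs = gt[i * h // k:(i + 1) * h // k,
                    j * w // k:(j + 1) * w // k].sum()
            err += abs(ps - gs)
    return err

# Toy 4x4 density maps: predicted count 8, ground-truth count 16.
pred = np.full((4, 4), 0.5)
gt = np.ones((4, 4))
print(game(pred, gt, 0))  # → 8.0
```

Higher L penalizes spatially misplaced predictions that a global count would hide, which is why GAME(0) alone (as reported here) only measures total-count accuracy.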
About the journal:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.