{"title":"MBDBFormer: a multimodal bridge dual branch Transformer for person re-identification","authors":"Xiangyu Deng, Jing Ding","doi":"10.1016/j.dsp.2025.105481","DOIUrl":null,"url":null,"abstract":"<div><div>A key challenge in person re-identification (ReID) is the extraction of robust, discriminative pedestrian features. However, the sensitivity of RGB images to illumination, viewpoint differences, and occlusion in complex scenes seriously affects the stability of feature extraction. To address these problems, we propose a Multimodal Bridge Dual Branch Transformer (MBDBFormer) that combines a CNN and a Transformer. First, during preprocessing we convert RGB to the IHS color space and use the luminance component, together with its frequency-domain low- and high-frequency components (denoted I), alongside the original RGB image as multimodal inputs, so that the network accounts for both illumination adaptation and color information. Second, to effectively fuse the complementary strengths of the two modalities, the preprocessed images are fed into a bridge branch network built from multilayer downsampling stages, which outputs one global and four local feature representations through Transformer encoding. Finally, we design the Gated Dynamic Attention and Feature Interaction Mechanism (GDFM), which dynamically allocates attention weights to strengthen the feature expression of discriminative regions such as edges and textures, establishes long-range dependencies between the RGB and I features, and achieves complementary optimization of the two modalities. The fused output features retain the rich color information of the RGB modality while incorporating the illumination robustness of the I modality. Extensive experiments show that our method outperforms state-of-the-art methods on the general Market1501, DukeMTMC, and MSMT17 datasets and on the Occluded-Duke occlusion dataset, verifying its effectiveness for person re-identification.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105481"},"PeriodicalIF":2.9000,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1051200425005032","RegionNum":3,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
A key challenge in person re-identification (ReID) is the extraction of robust, discriminative pedestrian features. However, the sensitivity of RGB images to illumination, viewpoint differences, and occlusion in complex scenes seriously affects the stability of feature extraction. To address these problems, we propose a Multimodal Bridge Dual Branch Transformer (MBDBFormer) that combines a CNN and a Transformer. First, during preprocessing we convert RGB to the IHS color space and use the luminance component, together with its frequency-domain low- and high-frequency components (denoted I), alongside the original RGB image as multimodal inputs, so that the network accounts for both illumination adaptation and color information. Second, to effectively fuse the complementary strengths of the two modalities, the preprocessed images are fed into a bridge branch network built from multilayer downsampling stages, which outputs one global and four local feature representations through Transformer encoding. Finally, we design the Gated Dynamic Attention and Feature Interaction Mechanism (GDFM), which dynamically allocates attention weights to strengthen the feature expression of discriminative regions such as edges and textures, establishes long-range dependencies between the RGB and I features, and achieves complementary optimization of the two modalities. The fused output features retain the rich color information of the RGB modality while incorporating the illumination robustness of the I modality. Extensive experiments show that our method outperforms state-of-the-art methods on the general Market1501, DukeMTMC, and MSMT17 datasets and on the Occluded-Duke occlusion dataset, verifying its effectiveness for person re-identification.
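The preprocessing step the abstract describes (extracting the IHS luminance channel and splitting it into low- and high-frequency components in the frequency domain) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the (R+G+B)/3 intensity formula, the circular low-pass FFT mask, and the `cutoff_ratio` parameter are all our assumptions.

```python
import numpy as np

def preprocess_multimodal(rgb, cutoff_ratio=0.1):
    """Sketch of IHS-luminance extraction plus a frequency-domain
    low/high split, assuming a 2-D FFT with a circular low-pass mask.

    rgb: float array of shape (H, W, 3).
    Returns (intensity, low_freq, high_freq), each of shape (H, W).
    """
    # IHS intensity (luminance): the mean of the R, G, B channels
    # (a common convention; the paper may use a different formula).
    intensity = rgb.mean(axis=2)

    h, w = intensity.shape
    # Shift the zero-frequency bin to the centre of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(intensity))

    # Circular low-pass mask around the centre; cutoff_ratio is a
    # hypothetical knob controlling how much counts as "low frequency".
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff_ratio * min(h, w)
    mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= radius ** 2

    # Inverse-transform each half; the two components sum back to
    # the original intensity image.
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(spectrum * ~mask)).real
    return intensity, low, high
```

Because the low-pass and high-pass masks partition the spectrum, `low + high` reconstructs the intensity image exactly (up to floating-point error), so the split loses no information before the two components are fed to the network.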
Journal description:
Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The Journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal.
The journal has a special emphasis on statistical signal processing methodology such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing, including seismic signal processing
• chemoinformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy