{"title":"Eliminating Non-Overlapping Semantic Misalignment for Cross-Modal Medical Retrieval","authors":"Zeqiang Wei;Zeyi Hou;Xiuzhuang Zhou","doi":"10.1109/LSP.2025.3602392","DOIUrl":null,"url":null,"abstract":"In recent years, increasing research has shown that fine-grained local alignment is crucial for the cross-modal medical image-report retrieval task. However, existing local alignment learning methods suffer from the misalignment of semantically non-overlapping features between different modalities, which in turn negatively affects the retrieval performance. To address this challenge, we propose a Global-Feature Guided Cross-modal Local Alignment (GFG-CMLA) method. Unlike prior methods that rely on explicit local attention or learned weighting mechanisms, our approach leverages global semantic features extracted from the cross-modal common semantic space to implicitly guide local alignment, adaptively focusing on semantically overlapping content while filtering out irrelevant local regions, thus mitigating misalignment interference without additional annotations or architectural complexity. We validated the effectiveness of the proposed method through ablation experiments on the MIMIC-CXR and CheXpert Plus dataset. Furthermore, comparisons with state-of-the-art local alignment methods indicate that our approach achieves superior cross-modal retrieval performance.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3510-3514"},"PeriodicalIF":3.9000,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11141016/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
In recent years, increasing research has shown that fine-grained local alignment is crucial for the cross-modal medical image-report retrieval task. However, existing local alignment learning methods suffer from misalignment of semantically non-overlapping features between modalities, which in turn degrades retrieval performance. To address this challenge, we propose a Global-Feature Guided Cross-modal Local Alignment (GFG-CMLA) method. Unlike prior methods that rely on explicit local attention or learned weighting mechanisms, our approach leverages global semantic features extracted from the cross-modal common semantic space to implicitly guide local alignment, adaptively focusing on semantically overlapping content while filtering out irrelevant local regions, thus mitigating misalignment interference without additional annotations or architectural complexity. We validated the effectiveness of the proposed method through ablation experiments on the MIMIC-CXR and CheXpert Plus datasets. Furthermore, comparisons with state-of-the-art local alignment methods indicate that our approach achieves superior cross-modal retrieval performance.
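To make the abstract's idea of global-feature-guided local alignment more concrete, below is a minimal PyTorch sketch of one plausible reading: each modality's local features are softly gated by their similarity to the *other* modality's global feature from the shared semantic space, so locally non-overlapping content contributes little to the alignment score. All tensor names, the gating scheme, the temperature `tau`, and the final scoring function are illustrative assumptions, not the authors' exact GFG-CMLA formulation.

```python
# Hypothetical sketch of global-feature-guided local alignment (not the paper's exact method).
import torch
import torch.nn.functional as F

def guided_local_alignment(img_locals, txt_locals, img_global, txt_global, tau=0.07):
    """
    img_locals: (N, d) patch features from the image encoder
    txt_locals: (M, d) token features from the report encoder
    img_global: (d,)  image feature projected into the shared semantic space
    txt_global: (d,)  report feature projected into the shared semantic space
    Returns a scalar cross-modal similarity score.
    """
    # Normalize so dot products are cosine similarities.
    img_locals = F.normalize(img_locals, dim=-1)
    txt_locals = F.normalize(txt_locals, dim=-1)
    img_global = F.normalize(img_global, dim=-1)
    txt_global = F.normalize(txt_global, dim=-1)

    # Gate local features by the other modality's global semantics:
    # patches unrelated to the report's overall content (and tokens unrelated
    # to the image's overall content) receive low weights.
    img_weights = torch.softmax(img_locals @ txt_global / tau, dim=0)  # (N,)
    txt_weights = torch.softmax(txt_locals @ img_global / tau, dim=0)  # (M,)

    # Pairwise local similarities, weighted by both gates and aggregated.
    local_sims = img_locals @ txt_locals.T                              # (N, M)
    score = (img_weights[:, None] * local_sims * txt_weights[None, :]).sum()
    return score
```

In this reading, no extra annotations or attention modules are introduced: the global features that already exist in the common semantic space double as an implicit filter over local regions before the local alignment score is computed.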
About the Journal
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.