{"title":"基于变压器的无监督跨模态哈希法反演与遥感反演","authors":"Weikang Gao;Zifan Liu;Yuan Cao;Zuojin Huang;Yaru Gao","doi":"10.1109/LSP.2025.3602637","DOIUrl":null,"url":null,"abstract":"With the rapid expansion of online information, cross-modal retrieval has emerged as a crucial and dynamic research focus. Deep hashing has gained significant traction in this field due to its efficiency in storage and retrieval speed, making it particularly valuable for remote sensing multi-modal retrieval. However, existing deep cross-modal hashing techniques often rely on parallel network structures for processing different modalities, overlooking a unified representation that captures cross-modal visual information. To address this limitation, we introduce a novel unsupervised cross-modal hashing framework that incorporates two modality-specific encoders and a fusion module. This fusion module facilitates modality interaction, enabling the extraction of meaningful semantic relationships across different data types. To ensure comprehensive similarity preservation, we design an integrated objective function that incorporates inter-modal and intra-modal constraints, joint consistency, and binary alignment losses. Furthermore, instead of conventional convolutional networks, we adopt the Swin Transformer as the backbone to enhance the discriminative power of image features. Our approach achieves an average 2.3% improvement in mAP on remote sensing cross-modal retrieval tasks compared to existing methods. The implementation is available at <uri>https://github.com/caoyuan57/TUCH</uri>.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3540-3544"},"PeriodicalIF":3.9000,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Transformer Based Unsupervised Cross-Modal Hashing for Normal and Remote Sensing Retrieval\",\"authors\":\"Weikang Gao;Zifan Liu;Yuan Cao;Zuojin Huang;Yaru Gao\",\"doi\":\"10.1109/LSP.2025.3602637\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the rapid expansion of online information, cross-modal retrieval has emerged as a crucial and dynamic research focus. Deep hashing has gained significant traction in this field due to its efficiency in storage and retrieval speed, making it particularly valuable for remote sensing multi-modal retrieval. However, existing deep cross-modal hashing techniques often rely on parallel network structures for processing different modalities, overlooking a unified representation that captures cross-modal visual information. To address this limitation, we introduce a novel unsupervised cross-modal hashing framework that incorporates two modality-specific encoders and a fusion module. This fusion module facilitates modality interaction, enabling the extraction of meaningful semantic relationships across different data types. To ensure comprehensive similarity preservation, we design an integrated objective function that incorporates inter-modal and intra-modal constraints, joint consistency, and binary alignment losses. Furthermore, instead of conventional convolutional networks, we adopt the Swin Transformer as the backbone to enhance the discriminative power of image features. Our approach achieves an average 2.3% improvement in mAP on remote sensing cross-modal retrieval tasks compared to existing methods. 
The implementation is available at <uri>https://github.com/caoyuan57/TUCH</uri>.\",\"PeriodicalId\":13154,\"journal\":{\"name\":\"IEEE Signal Processing Letters\",\"volume\":\"32 \",\"pages\":\"3540-3544\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2025-08-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Signal Processing Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11141368/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11141368/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Transformer Based Unsupervised Cross-Modal Hashing for Normal and Remote Sensing Retrieval
With the rapid expansion of online information, cross-modal retrieval has emerged as a crucial and dynamic research focus. Deep hashing has gained significant traction in this field due to its efficiency in storage and retrieval speed, making it particularly valuable for remote sensing multi-modal retrieval. However, existing deep cross-modal hashing techniques often rely on parallel network structures for processing different modalities, overlooking a unified representation that captures cross-modal visual information. To address this limitation, we introduce a novel unsupervised cross-modal hashing framework that incorporates two modality-specific encoders and a fusion module. This fusion module facilitates modality interaction, enabling the extraction of meaningful semantic relationships across different data types. To ensure comprehensive similarity preservation, we design an integrated objective function that incorporates inter-modal and intra-modal constraints, joint consistency, and binary alignment losses. Furthermore, instead of conventional convolutional networks, we adopt the Swin Transformer as the backbone to enhance the discriminative power of image features. Our approach achieves an average 2.3% improvement in mAP on remote sensing cross-modal retrieval tasks compared to existing methods. The implementation is available at https://github.com/caoyuan57/TUCH.
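To make the described pipeline concrete, below is a minimal PyTorch sketch of the kind of framework the abstract outlines: two modality-specific encoders, a fusion module for cross-modal interaction, a shared hashing head, and a combined similarity-preservation objective. All layer sizes, the attention-based fusion design, the loss formulations, and the weights are illustrative assumptions rather than the paper's actual configuration; the authors' implementation is in the linked repository.

```python
# Illustrative sketch only: two modality-specific encoders, a cross-attention
# fusion module, a shared hashing head, and a combined objective with
# inter-modal, intra-modal, joint-consistency, and binary-alignment terms.
# Dimensions, fusion design, and loss weights are assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalHasher(nn.Module):
    def __init__(self, img_dim=1024, txt_dim=512, hidden=512, code_len=64):
        super().__init__()
        # Modality-specific encoders. The paper uses a Swin Transformer backbone
        # for images; pre-extracted feature vectors are assumed here for brevity.
        self.img_enc = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.txt_enc = nn.Sequential(nn.Linear(txt_dim, hidden), nn.ReLU())
        # Fusion module: cross-attention between the two modality embeddings
        # (shared weights for both directions in this sketch).
        self.fusion = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        # Shared hashing head producing continuous codes in (-1, 1).
        self.hash_head = nn.Sequential(nn.Linear(hidden, code_len), nn.Tanh())

    def forward(self, img_feat, txt_feat):
        zi = self.img_enc(img_feat).unsqueeze(1)   # (B, 1, hidden)
        zt = self.txt_enc(txt_feat).unsqueeze(1)   # (B, 1, hidden)
        # Let each modality attend to the other to model cross-modal interaction.
        fi, _ = self.fusion(zi, zt, zt)
        ft, _ = self.fusion(zt, zi, zi)
        hi = self.hash_head(fi.squeeze(1))          # image hash code (relaxed)
        ht = self.hash_head(ft.squeeze(1))          # text hash code (relaxed)
        return hi, ht


def similarity_losses(hi, ht, s_inter, s_img, s_txt):
    """Combine inter-modal, intra-modal, joint-consistency, and binary-alignment
    terms; s_* are (B, B) target similarity matrices built without labels.
    The exact formulations and weights in the paper may differ."""
    cos = lambda a, b: F.normalize(a, dim=1) @ F.normalize(b, dim=1).t()
    l_inter = F.mse_loss(cos(hi, ht), s_inter)                       # cross-modal
    l_intra = F.mse_loss(cos(hi, hi), s_img) + F.mse_loss(cos(ht, ht), s_txt)
    l_joint = F.mse_loss(hi, ht)                                     # paired codes agree
    l_binary = ((hi.abs() - 1) ** 2).mean() + ((ht.abs() - 1) ** 2).mean()
    return l_inter + l_intra + 0.5 * l_joint + 0.1 * l_binary
```

In an unsupervised setting such as this, the target similarity matrices would typically be derived from deep feature affinities rather than class labels, and the relaxed codes would be binarized with a sign function at retrieval time; both choices here are assumptions for illustration.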
Journal Introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, and also at several workshops organized by the Signal Processing Society.