Wai Keung Wong;Lunke Fei;Jianyang Qin;Shuping Zhao;Jie Wen;Zhihao He
{"title":"大规模跨模态检索的异构对语义增强哈希","authors":"Wai Keung Wong;Lunke Fei;Jianyang Qin;Shuping Zhao;Jie Wen;Zhihao He","doi":"10.1109/TMM.2025.3535401","DOIUrl":null,"url":null,"abstract":"Cross-modal hash learning has drawn widespread attention for large-scale multimodal retrieval because of its stability and efficiency in approximate similarity searches. However, most existing cross-modal hashing approaches employ discrete label-guided information to coarsely reflect intra- and intermodality correlations, making them less effective to measuring the semantic similarity of data with multiple modalities. In this paper, we propose a new heterogeneous pairwise-semantic enhancement hashing (HPsEH) for large-scale cross-modal retrieval by distilling higher-level pairwise-semantic similarity from supervision information. First, we adopt a supervised self-expression to learn a data-specific quantified semantic matrix, which uses real values to measure both the similarity and dissimilarity ranks of paired instances, such that the intrinsic semantics of the data can be well captured. Then, we fuse the label-based information and quantified semantic similarity to collaboratively learn the hash codes of multimodal data, such that both the intermodality consistency and modality-specific features can be simultaneously obtained during hash code learning. Moreover, we employ effective iterative optimization to address the discrete binary solution and massive pairwise matrix calculation, making the HPsEH scalable to large-scale datasets. Extensive experimental results on three widely used datasets demonstrate the superiority of our proposed HPsEH method over most state-of-the art approaches.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"3238-3250"},"PeriodicalIF":9.7000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Heterogeneous Pairwise-Semantic Enhancement Hashing for Large-Scale Cross-Modal Retrieval\",\"authors\":\"Wai Keung Wong;Lunke Fei;Jianyang Qin;Shuping Zhao;Jie Wen;Zhihao He\",\"doi\":\"10.1109/TMM.2025.3535401\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cross-modal hash learning has drawn widespread attention for large-scale multimodal retrieval because of its stability and efficiency in approximate similarity searches. However, most existing cross-modal hashing approaches employ discrete label-guided information to coarsely reflect intra- and intermodality correlations, making them less effective to measuring the semantic similarity of data with multiple modalities. In this paper, we propose a new heterogeneous pairwise-semantic enhancement hashing (HPsEH) for large-scale cross-modal retrieval by distilling higher-level pairwise-semantic similarity from supervision information. First, we adopt a supervised self-expression to learn a data-specific quantified semantic matrix, which uses real values to measure both the similarity and dissimilarity ranks of paired instances, such that the intrinsic semantics of the data can be well captured. Then, we fuse the label-based information and quantified semantic similarity to collaboratively learn the hash codes of multimodal data, such that both the intermodality consistency and modality-specific features can be simultaneously obtained during hash code learning. 
Moreover, we employ effective iterative optimization to address the discrete binary solution and massive pairwise matrix calculation, making the HPsEH scalable to large-scale datasets. Extensive experimental results on three widely used datasets demonstrate the superiority of our proposed HPsEH method over most state-of-the art approaches.\",\"PeriodicalId\":13273,\"journal\":{\"name\":\"IEEE Transactions on Multimedia\",\"volume\":\"27 \",\"pages\":\"3238-3250\"},\"PeriodicalIF\":9.7000,\"publicationDate\":\"2025-01-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Multimedia\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10855454/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Multimedia","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10855454/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Heterogeneous Pairwise-Semantic Enhancement Hashing for Large-Scale Cross-Modal Retrieval
Cross-modal hash learning has drawn widespread attention for large-scale multimodal retrieval because of its stability and efficiency in approximate similarity search. However, most existing cross-modal hashing approaches employ discrete label-guided information to coarsely reflect intra- and inter-modality correlations, making them less effective at measuring the semantic similarity of data with multiple modalities. In this paper, we propose a new heterogeneous pairwise-semantic enhancement hashing (HPsEH) method for large-scale cross-modal retrieval that distills higher-level pairwise-semantic similarity from supervision information. First, we adopt a supervised self-expression to learn a data-specific quantified semantic matrix, which uses real values to measure both the similarity and dissimilarity ranks of paired instances, so that the intrinsic semantics of the data are well captured. Then, we fuse the label-based information with the quantified semantic similarity to collaboratively learn the hash codes of multimodal data, so that both inter-modality consistency and modality-specific features are obtained simultaneously during hash-code learning. Moreover, we employ an effective iterative optimization scheme to handle the discrete binary solution and the massive pairwise matrix computation, making HPsEH scalable to large-scale datasets. Extensive experimental results on three widely used datasets demonstrate the superiority of the proposed HPsEH method over most state-of-the-art approaches.
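The abstract describes the method only at a high level (a quantified pairwise-semantic matrix from supervised self-expression, fusion with label similarity, and discrete hash-code learning), so the following is a minimal NumPy sketch of that general pipeline under explicit assumptions: a ridge-regularized self-expression with a closed-form solution stands in for the paper's supervised self-expression, a sign-thresholded spectral relaxation stands in for its discrete iterative optimizer, and all symbols (S_q, alpha, W_img, W_txt) are illustrative choices, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multimodal data: n paired instances, two modalities, c classes.
n, d_img, d_txt, c, r = 200, 64, 32, 5, 16   # r = hash code length

X_img = rng.standard_normal((n, d_img))
X_txt = rng.standard_normal((n, d_txt))
Y = np.eye(c)[rng.integers(0, c, n)]         # one-hot labels, n x c

# Step 1 (sketch): learn a real-valued ("quantified") pairwise semantic
# matrix via a label-supervised self-expression. We minimize
# ||Y - Z Y||_F^2 + lam ||Z||_F^2, which has the closed form
# Z = Y Y^T (Y Y^T + lam I)^{-1}; the paper's formulation may differ.
lam = 1.0
G = Y @ Y.T                                  # discrete label agreement
Z = G @ np.linalg.inv(G + lam * np.eye(n))
S_q = (Z + Z.T) / 2                          # symmetrized quantified similarity

# Step 2 (sketch): fuse discrete label similarity with S_q, then seek codes
# B in {-1,+1}^{n x r} with B B^T ~ r * S. Here we binarize the top-r
# eigenvectors of S (spectral relaxation) instead of the paper's discrete
# iterative optimizer.
alpha = 0.5
S_label = 2.0 * (G > 0) - 1.0                # +1 same class, -1 otherwise
S = alpha * S_label + (1 - alpha) * (2 * S_q - 1)

w, V = np.linalg.eigh(S)                     # eigenvalues ascending
B = np.sign(V[:, -r:])                       # n x r binary codes
B[B == 0] = 1

# Modality-specific linear hash functions, fit by least squares so unseen
# queries from either modality can be mapped to codes.
W_img, *_ = np.linalg.lstsq(X_img, B, rcond=None)
W_txt, *_ = np.linalg.lstsq(X_txt, B, rcond=None)

# Cross-modal retrieval: hash an image query, rank texts by Hamming distance.
q = np.sign(X_img[0] @ W_img); q[q == 0] = 1
db = np.sign(X_txt @ W_txt);   db[db == 0] = 1
hamming = (r - db @ q) / 2                   # distance between +/-1 codes
print("top-5 retrieved indices:", np.argsort(hamming)[:5])
```

In this toy setting the self-expression has a cheap closed form, so the fused similarity is built directly; at realistic scale the n x n pairwise matrix is exactly what the paper's iterative optimization is designed to avoid materializing.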
Journal introduction:
The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.