Title: Alignment efficient image-sentence retrieval considering transferable cross-modal representation learning
Authors: Yang Yang, Jinyi Guo, Guangyu Li, Lanyu Li, Wenjie Li, Jian Yang
Journal: Frontiers of Computer Science
DOI: 10.1007/s11704-023-3186-6 (https://doi.org/10.1007/s11704-023-3186-6)
Published: 2023-12-02
Citations: 0
Abstract
Traditional image-sentence cross-modal retrieval methods usually aim to learn consistent representations of heterogeneous modalities, so that instances in one modality can be retrieved according to a query from the other. The basic assumption behind these methods is that parallel multi-modal data (i.e., different modalities of the same example are aligned) are available in advance. In other words, image-sentence cross-modal retrieval is a supervised task with the alignments as ground truths. However, in many real-world applications, it is difficult to align a large amount of parallel data for new scenarios due to the substantial labor cost; as a result, only non-parallel multi-modal data are available and existing methods cannot be applied directly. On the other hand, auxiliary parallel multi-modal data with similar semantics often exist, and they can help the non-parallel data learn consistent representations. Therefore, in this paper, we aim at "Alignment Efficient Image-Sentence Retrieval" (AEIR), which resorts to auxiliary parallel image-sentence data as the source domain and treats the non-parallel data as the target domain. Unlike single-modal transfer learning, AEIR learns consistent image-sentence cross-modal representations for the target domain by transferring the alignments of the existing parallel data. Specifically, AEIR learns consistent image-sentence representations in the source domain from parallel data, while transferring alignment knowledge across domains by jointly optimizing a newly designed cross-domain cross-modal metric-learning constraint with an intra-modal domain adversarial loss. Consequently, we can effectively learn consistent representations for the target domain, accounting for both structure and semantic transfer. Furthermore, extensive experiments on different transfer scenarios validate that AEIR achieves better retrieval results than the baselines.
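The abstract only names the two loss terms, so the following is a minimal, hypothetical PyTorch sketch of how such a joint objective might be assembled: a bidirectional triplet (metric-learning) constraint enforcing image-sentence alignment on parallel source-domain pairs, combined with intra-modal domain discriminators trained through gradient reversal so that source and target features of each modality become indistinguishable. All module names, hyperparameters, and architectural choices here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: cross-modal triplet constraint + intra-modal
# domain adversarial loss via gradient reversal. Names and hyperparameters
# (margin, lam, hidden sizes) are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def triplet_alignment_loss(img, txt, margin=0.2):
    """Bidirectional hinge loss on cosine similarity: aligned (diagonal)
    image-sentence pairs must beat mismatched pairs by a margin."""
    img = F.normalize(img, dim=1)
    txt = F.normalize(txt, dim=1)
    sim = img @ txt.t()                                # (B, B) similarities
    pos = sim.diag().unsqueeze(1)                      # aligned-pair scores
    cost_i2t = (margin + sim - pos).clamp(min=0)       # image -> sentence
    cost_t2i = (margin + sim - pos.t()).clamp(min=0)   # sentence -> image
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    return (cost_i2t.masked_fill(mask, 0).mean()
            + cost_t2i.masked_fill(mask, 0).mean())

class DomainDiscriminator(nn.Module):
    """Predicts source vs. target domain from one modality's features."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, feat, lam=1.0):
        # Gradient reversal turns the minimization into an adversarial game.
        return self.net(GradReverse.apply(feat, lam)).squeeze(1)

def joint_loss(src_img, src_txt, tgt_img, tgt_txt, d_img, d_txt, lam=0.1):
    """Alignment constraint on parallel source pairs plus intra-modal
    adversarial terms pulling source/target feature distributions together."""
    align = triplet_alignment_loss(src_img, src_txt)
    feats_img = torch.cat([src_img, tgt_img])
    feats_txt = torch.cat([src_txt, tgt_txt])
    labels = torch.cat([torch.ones(src_img.size(0)),
                        torch.zeros(tgt_img.size(0))]).to(src_img.device)
    adv = (F.binary_cross_entropy_with_logits(d_img(feats_img, lam), labels)
           + F.binary_cross_entropy_with_logits(d_txt(feats_txt, lam), labels))
    return align + adv

if __name__ == "__main__":
    # Smoke test with random features standing in for encoder outputs.
    B, D = 8, 128
    d_img, d_txt = DomainDiscriminator(D), DomainDiscriminator(D)
    loss = joint_loss(torch.randn(B, D), torch.randn(B, D),
                      torch.randn(B, D), torch.randn(B, D), d_img, d_txt)
    loss.backward()
    print(loss.item())
```

The gradient-reversal construction follows the standard DANN recipe of Ganin and Lempitsky: the discriminators are trained to separate domains while the reversed gradients push the encoders toward domain-invariant features within each modality, which matches the abstract's "intra-modal domain adversarial loss" at a high level.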
Journal introduction:
Frontiers of Computer Science aims to provide a forum for the publication of peer-reviewed papers to promote rapid communication and exchange among computer scientists. The journal publishes research papers and review articles on a wide range of topics, including architecture, software, artificial intelligence, theoretical computer science, networks and communication, information systems, multimedia and graphics, information security, and interdisciplinary areas. The journal especially encourages papers from new, emerging, and multidisciplinary areas, as well as papers reflecting international trends in research and development and special-topic papers reporting progress made by Chinese computer scientists.