{"title":"零射击跨模态检索的相关特征综合与对齐","authors":"Xing Xu, Kaiyi Lin, Huimin Lu, Lianli Gao, Heng Tao Shen","doi":"10.1145/3397271.3401149","DOIUrl":null,"url":null,"abstract":"The goal of cross-modal retrieval is to search for semantically similar instances in one modality by using a query from another modality. Existing approaches mainly consider the standard scenario that requires the source set for training and the target set for testing share the same scope of classes. However, they may not generalize well on zero-shot cross-modal retrieval (ZS-CMR) task, where the target set contains unseen classes that are disjoint with the seen classes in the source set. This task is more challenging due to 1) the absence of the unseen classes during training, 2) inconsistent semantics across seen and unseen classes, and 3) the heterogeneous multimodal distributions between the source and target set. To address these issues, we propose a novel Correlated Feature Synthesis and Alignment (CFSA) approach to integrate multimodal feature synthesis, common space learning and knowledge transfer for ZS-CMR. Our CFSA first utilizes class-level word embeddings to guide two coupled Wassertein generative adversarial networks (WGANs) to synthesize sufficient multimodal features with semantic correlation for stable training. Then the synthetic and true multimodal features are jointly mapped to a common semantic space via an effective distribution alignment scheme, where the cross-modal correlations of different semantic features are captured and the knowledge can be transferred to the unseen classes under the cycle-consistency constraint. Experiments on four benchmark datasets for image-text retrieval and two large-scale datasets for image-sketch retrieval show the remarkable improvements achieved by our CFAS method comparing with a bundle of state-of-the-art approaches.","PeriodicalId":252050,"journal":{"name":"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"Correlated Features Synthesis and Alignment for Zero-shot Cross-modal Retrieval\",\"authors\":\"Xing Xu, Kaiyi Lin, Huimin Lu, Lianli Gao, Heng Tao Shen\",\"doi\":\"10.1145/3397271.3401149\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The goal of cross-modal retrieval is to search for semantically similar instances in one modality by using a query from another modality. Existing approaches mainly consider the standard scenario that requires the source set for training and the target set for testing share the same scope of classes. However, they may not generalize well on zero-shot cross-modal retrieval (ZS-CMR) task, where the target set contains unseen classes that are disjoint with the seen classes in the source set. This task is more challenging due to 1) the absence of the unseen classes during training, 2) inconsistent semantics across seen and unseen classes, and 3) the heterogeneous multimodal distributions between the source and target set. To address these issues, we propose a novel Correlated Feature Synthesis and Alignment (CFSA) approach to integrate multimodal feature synthesis, common space learning and knowledge transfer for ZS-CMR. 
Our CFSA first utilizes class-level word embeddings to guide two coupled Wassertein generative adversarial networks (WGANs) to synthesize sufficient multimodal features with semantic correlation for stable training. Then the synthetic and true multimodal features are jointly mapped to a common semantic space via an effective distribution alignment scheme, where the cross-modal correlations of different semantic features are captured and the knowledge can be transferred to the unseen classes under the cycle-consistency constraint. Experiments on four benchmark datasets for image-text retrieval and two large-scale datasets for image-sketch retrieval show the remarkable improvements achieved by our CFAS method comparing with a bundle of state-of-the-art approaches.\",\"PeriodicalId\":252050,\"journal\":{\"name\":\"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-07-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3397271.3401149\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3397271.3401149","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Correlated Features Synthesis and Alignment for Zero-shot Cross-modal Retrieval
The goal of cross-modal retrieval is to search for semantically similar instances in one modality using a query from another modality. Existing approaches mainly consider the standard scenario, which requires the source set used for training and the target set used for testing to share the same set of classes. However, they may not generalize well to the zero-shot cross-modal retrieval (ZS-CMR) task, where the target set contains unseen classes that are disjoint from the seen classes in the source set. This task is more challenging due to 1) the absence of the unseen classes during training, 2) inconsistent semantics across seen and unseen classes, and 3) the heterogeneous multimodal distributions between the source and target sets. To address these issues, we propose a novel Correlated Feature Synthesis and Alignment (CFSA) approach that integrates multimodal feature synthesis, common space learning and knowledge transfer for ZS-CMR. CFSA first utilizes class-level word embeddings to guide two coupled Wasserstein generative adversarial networks (WGANs) to synthesize sufficient semantically correlated multimodal features for stable training. The synthetic and real multimodal features are then jointly mapped to a common semantic space via an effective distribution alignment scheme, where the cross-modal correlations of different semantic features are captured and knowledge is transferred to the unseen classes under a cycle-consistency constraint. Experiments on four benchmark datasets for image-text retrieval and two large-scale datasets for image-sketch retrieval show the remarkable improvements achieved by our CFSA method compared with a range of state-of-the-art approaches.
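To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of its two core ingredients for a single modality: a conditional WGAN (with gradient penalty) that synthesizes features from class-level word embeddings, and a cycle-consistency term that reconstructs the class embedding from the common semantic space. This is an illustration under assumptions, not the authors' implementation: all module names, dimensions, architectures, and loss weights here are hypothetical, and CFSA couples two such WGANs (one per modality) through the shared class embeddings.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: GloVe-style word embeddings, CNN image
# features, and a learned common semantic space.
WORD_DIM, NOISE_DIM, FEAT_DIM, COMMON_DIM = 300, 100, 2048, 512

class Generator(nn.Module):
    # Conditional generator: synthesizes modality features from a
    # class-level word embedding concatenated with Gaussian noise.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WORD_DIM + NOISE_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, FEAT_DIM), nn.ReLU(),
        )

    def forward(self, word_emb, noise):
        return self.net(torch.cat([word_emb, noise], dim=1))

class Critic(nn.Module):
    # WGAN critic: scores (feature, class embedding) pairs; no sigmoid,
    # as the Wasserstein objective uses unbounded critic outputs.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + WORD_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 1),
        )

    def forward(self, feat, word_emb):
        return self.net(torch.cat([feat, word_emb], dim=1))

def gradient_penalty(critic, real, fake, word_emb):
    # Standard WGAN-GP penalty on interpolates between real and fake features.
    alpha = torch.rand(real.size(0), 1)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(interp, word_emb).sum(),
                                interp, create_graph=True)
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()

# One illustrative training step with placeholder data.
gen, critic = Generator(), Critic()
real_feat = torch.randn(8, FEAT_DIM)   # stand-in for real image features
word_emb = torch.randn(8, WORD_DIM)    # stand-in for class word embeddings
noise = torch.randn(8, NOISE_DIM)

fake_feat = gen(word_emb, noise)
critic_loss = (critic(fake_feat.detach(), word_emb).mean()
               - critic(real_feat, word_emb).mean()
               + 10.0 * gradient_penalty(critic, real_feat,
                                         fake_feat.detach(), word_emb))
gen_loss = -critic(fake_feat, word_emb).mean()

# Cycle-consistency sketch: map features into the common semantic space
# and reconstruct the class embedding, so class semantics survive the
# round trip and can transfer to unseen classes.
encode = nn.Linear(FEAT_DIM, COMMON_DIM)   # modality -> common space
decode = nn.Linear(COMMON_DIM, WORD_DIM)   # common space -> class embedding
cycle_loss = nn.functional.mse_loss(decode(encode(fake_feat)), word_emb)
```

In this reading, the synthetic features let the common-space encoder see (generated) samples of unseen classes at training time, while the cycle term ties both real and synthetic features back to the class-level semantics that seen and unseen classes share.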