{"title":"Multi-grained Representation Learning for Cross-modal Retrieval","authors":"Shengwei Zhao, Linhai Xu, Yuying Liu, S. Du","doi":"10.1145/3539618.3592025","DOIUrl":null,"url":null,"abstract":"The purpose of audio-text retrieval is to learn a cross-modal similarity function between audio and text, enabling a given audio/text to find similar text/audio from a candidate set. Recent audio-text retrieval models aggregate multi-modal features into a single-grained representation. However, single-grained representation is difficult to solve the situation that an audio is described by multiple texts of different granularity levels, because the association pattern between audio and text is complex. Therefore, we propose an adaptive aggregation strategy to automatically find the optimal pool function to aggregate the features into a comprehensive representation, so as to learn valuable multi-grained representation. And multi-grained comparative learning is carried out in order to focus on the complex correlation between audio and text in different granularity. Meanwhile, text-guided token interaction is used to reduce the impact of redundant audio clips. We evaluated our proposed method on two audio-text retrieval benchmark datasets of Audiocaps and Clotho, achieving the state-of-the-art results in text-to-audio and audio-to-text retrieval. Our findings emphasize the importance of learning multi-modal multi-grained representation.","PeriodicalId":425056,"journal":{"name":"Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval","volume":"238 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3539618.3592025","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The purpose of audio-text retrieval is to learn a cross-modal similarity function between audio and text, so that a given audio clip or text query can retrieve its most similar counterparts from a candidate set. Recent audio-text retrieval models aggregate multi-modal features into a single-grained representation. However, a single-grained representation struggles when an audio clip is described by multiple texts at different granularity levels, because the association patterns between audio and text are complex. We therefore propose an adaptive aggregation strategy that automatically finds the optimal pooling function for aggregating features into a comprehensive representation, so as to learn valuable multi-grained representations. Multi-grained contrastive learning is then carried out to capture the complex correlations between audio and text at different granularities. Meanwhile, text-guided token interaction is used to reduce the impact of redundant audio clips. We evaluate the proposed method on two audio-text retrieval benchmark datasets, AudioCaps and Clotho, achieving state-of-the-art results in both text-to-audio and audio-to-text retrieval. Our findings emphasize the importance of learning multi-modal, multi-grained representations.
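To make the adaptive aggregation idea concrete, the following is a minimal sketch, assuming a PyTorch setting, of how frame-level audio features could be pooled by a learned soft combination of candidate pooling functions (mean, max, and attention pooling). The class name, the three candidates, and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an "adaptive aggregation" module: learn a soft
# weighting over several candidate pooling functions so the model can choose
# how frame-level audio features are aggregated into a clip-level embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveAggregation(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Learnable logits over three candidate pooling functions.
        self.pool_logits = nn.Parameter(torch.zeros(3))
        # Simple scorer used for attention pooling.
        self.attn = nn.Linear(dim, 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_frames, dim) frame-level audio features
        mean_pool = tokens.mean(dim=1)
        max_pool = tokens.max(dim=1).values
        attn_weights = F.softmax(self.attn(tokens), dim=1)   # (batch, num_frames, 1)
        attn_pool = (attn_weights * tokens).sum(dim=1)
        # Softly combine the candidates; training pushes the weights toward
        # the pooling behaviour that best fits the data.
        w = F.softmax(self.pool_logits, dim=0)
        return w[0] * mean_pool + w[1] * max_pool + w[2] * attn_pool


# Usage: aggregate a batch of 4 clips, each with 100 frames of 512-dim features.
agg = AdaptiveAggregation(dim=512)
clip_embedding = agg(torch.randn(4, 100, 512))  # -> shape (4, 512)
```

The resulting clip-level (and, analogously, sentence-level) embeddings could then be compared with a standard contrastive objective such as InfoNCE, which is one common way to realise the multi-grained contrastive learning described above.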