{"title":"基于大迁移和广义注意卷积区域最大激活的跨模态检索","authors":"Wenwen Yang, Yan Hua","doi":"10.1145/3483845.3483872","DOIUrl":null,"url":null,"abstract":"Image-text retrieval is a challenge topic since image features are still not good enough to represent the high-level semantic information, though the representation ability is improved thanks to advances in deep learning. This paper proposes a cross-modal image-text retrieval framework (BiTGRMAC) based on big transfer and region maximum activation convolution with generalized attention, where big transfer (BiT) trained with large amount data is utilized to extract image features and fine-tuned on the cross-modal image datasets. At the same time, a new generalized attention region maximum activation convolution (GRMAC) descriptor is introduced into BiT, which can generate image features through attention mechanism, then reduce the influence of background clustering and highlight the target. For texts, the widely used Sentence CNN is adopted to extract text features. The parameters of image and text deep models are learned by minimizing a cross-modal loss function in an end-to-end framework. Experimental results show that this method can effectively improve the accuracy of retrieval on three widely used datasets.","PeriodicalId":134636,"journal":{"name":"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-modal Retrieval based on Big Transfer and Regional Maximum Activation of Convolutions with Generalized Attention\",\"authors\":\"Wenwen Yang, Yan Hua\",\"doi\":\"10.1145/3483845.3483872\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Image-text retrieval is a challenge topic since image features are still not good enough to represent the high-level semantic information, though the representation ability is improved thanks to advances in deep learning. This paper proposes a cross-modal image-text retrieval framework (BiTGRMAC) based on big transfer and region maximum activation convolution with generalized attention, where big transfer (BiT) trained with large amount data is utilized to extract image features and fine-tuned on the cross-modal image datasets. At the same time, a new generalized attention region maximum activation convolution (GRMAC) descriptor is introduced into BiT, which can generate image features through attention mechanism, then reduce the influence of background clustering and highlight the target. For texts, the widely used Sentence CNN is adopted to extract text features. The parameters of image and text deep models are learned by minimizing a cross-modal loss function in an end-to-end framework. 
Experimental results show that this method can effectively improve the accuracy of retrieval on three widely used datasets.\",\"PeriodicalId\":134636,\"journal\":{\"name\":\"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3483845.3483872\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3483845.3483872","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Image-text retrieval remains a challenging topic: although advances in deep learning have improved representation ability, image features are still not expressive enough to capture high-level semantic information. This paper proposes BiTGRMAC, a cross-modal image-text retrieval framework based on Big Transfer and regional maximum activation of convolutions with generalized attention. Big Transfer (BiT), pre-trained on large-scale data, is used to extract image features and is fine-tuned on the cross-modal datasets. A new generalized-attention regional maximum activation of convolutions (GRMAC) descriptor is introduced into BiT; it generates image features through an attention mechanism that reduces the influence of background clutter and highlights the target. For texts, the widely used Sentence CNN extracts text features. The parameters of the image and text deep models are learned end-to-end by minimizing a cross-modal loss function. Experimental results show that the method effectively improves retrieval accuracy on three widely used datasets.
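The abstract does not spell out the GRMAC descriptor, but its name suggests combining R-MAC regional pooling with generalized-mean pooling and an attention weighting of the feature map. Below is a minimal PyTorch sketch under that reading; the grid layout, the channel-sum attention map, and the exponent `p` are assumptions for illustration, not the paper's exact design.

```python
import torch

def grmac(fmap: torch.Tensor, levels: int = 3, p: float = 3.0, eps: float = 1e-6) -> torch.Tensor:
    """GRMAC-style global descriptor (sketch).

    fmap: (C, H, W) feature map from the image backbone (e.g. BiT);
    assumes H and W are at least `levels`. Returns an L2-normalized (C,) vector.
    """
    C, H, W = fmap.shape
    # Attention map: channel-sum of activations, min-max normalized to [0, 1].
    # (An assumed mechanism; the paper's attention may be defined differently.)
    attn = fmap.sum(dim=0)
    attn = (attn - attn.min()) / (attn.max() - attn.min() + eps)
    weighted = fmap * attn  # down-weight background, emphasize salient regions

    desc = fmap.new_zeros(C)
    for l in range(1, levels + 1):
        # An l x l grid of regions tiling the map (a simplified R-MAC grid).
        hs, ws = H // l, W // l
        for i in range(l):
            for j in range(l):
                region = weighted[:, i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
                # Generalized-mean pooling over the region (p -> inf recovers max).
                r = region.clamp(min=eps).pow(p).mean(dim=(1, 2)).pow(1.0 / p)
                desc = desc + r / (r.norm() + eps)  # L2-normalize each region vector
    return desc / (desc.norm() + eps)
```

For a BiT backbone the input would typically be the final convolutional stage, e.g. `grmac(torch.randn(2048, 7, 7))` yields a 2048-d descriptor; as `p` grows, the pooling approaches the plain max pooling of standard R-MAC.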
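On the text side, "Sentence CNN" conventionally refers to a Kim-style CNN: convolutions of several widths over word embeddings followed by max-over-time pooling. A compact sketch, with filter sizes, dimensions, and the projection layer chosen for illustration rather than taken from the paper:

```python
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    """Kim-style Sentence CNN for text features (the paper's variant may
    differ in filter sizes, embedding setup, and output dimension)."""
    def __init__(self, vocab_size: int, emb_dim: int = 300, n_filters: int = 100,
                 sizes=(3, 4, 5), out_dim: int = 512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(nn.Conv1d(emb_dim, n_filters, k) for k in sizes)
        self.fc = nn.Linear(n_filters * len(sizes), out_dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, L) word indices -> (B, emb_dim, L) for 1-D convolution;
        # assumes L is at least the largest filter size.
        x = self.emb(tokens).transpose(1, 2)
        # Convolve with each filter width, then max-pool over time.
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))  # (B, out_dim) text embedding
```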
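The cross-modal loss itself is not specified in the abstract. A common choice for end-to-end image-text retrieval is a bidirectional hinge-based ranking loss over the batch similarity matrix; the sketch below shows that variant purely as an illustration of how such a loss could couple the two encoders.

```python
import torch

def bidirectional_ranking_loss(img: torch.Tensor, txt: torch.Tensor,
                               margin: float = 0.2) -> torch.Tensor:
    """Hinge-based bidirectional ranking loss (an assumed, common formulation).

    img, txt: (B, D) L2-normalized image / text embeddings; row i of each
    forms a matching pair.
    """
    sim = img @ txt.t()            # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)  # similarities of the true pairs
    # Hinge costs for image -> text and text -> image retrieval directions.
    cost_t = (margin + sim - pos).clamp(min=0)
    cost_i = (margin + sim - pos.t()).clamp(min=0)
    # Positive pairs incur no cost against themselves: zero the diagonal.
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    return cost_t.masked_fill(mask, 0).sum() + cost_i.masked_fill(mask, 0).sum()
```

Minimizing such a loss pushes matching image-text pairs above all non-matching ones in both retrieval directions, which is what lets the BiT image branch and the Sentence CNN text branch be trained jointly end-to-end.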