En Yu, Jiande Sun, Li Wang, Huaxiang Zhang, Jing Li
2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), November 2017. DOI: 10.1109/ISPACS.2017.8266495
Coupled feature selection for modality-dependent cross-media retrieval
With the explosive growth of multimedia data, cross-media retrieval technology has drawn much attention. Previous methods usually imposed an ℓ2-norm regularization constraint when learning the projection matrices, which cannot exploit the informative and discriminative features needed to reach better performance. In this paper, we propose a coupled feature selection model for cross-media retrieval (CFSCR) based on the modality-dependent method. In detail, the proposed framework learns two pairs of projection matrices for the two retrieval sub-tasks (image-to-text, I2T, and text-to-image, T2I), and uses the ℓ21-norm for coupled feature selection when learning the mapping matrices, which not only accounts for the relevance measure but also selects informative and discriminative features from the image and text feature spaces. Experimental results on three different datasets demonstrate that our method performs better than state-of-the-art methods.
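To illustrate the ℓ21-norm regularizer the abstract relies on (a minimal sketch with made-up numbers, not the paper's implementation): the ℓ21-norm of a projection matrix sums the Euclidean norms of its rows, so penalizing it drives entire rows toward zero and effectively discards the corresponding features, which is what makes it suitable for feature selection.

```python
import numpy as np

def l21_norm(W):
    """Compute ||W||_{2,1}: the sum of the l2-norms of the rows of W.

    Each row of a projection matrix corresponds to one input feature;
    minimizing this quantity encourages whole rows to vanish, i.e.
    row-sparse feature selection.
    """
    return float(np.sum(np.linalg.norm(W, axis=1)))

# Toy projection matrix: the first feature's row is all zeros,
# so only the second feature contributes to the penalty.
W = np.array([[0.0, 0.0],
              [3.0, 4.0]])
print(l21_norm(W))  # 5.0 — the l2-norm of the single nonzero row
```

In contrast, a squared ℓ2 (Frobenius) penalty shrinks all entries uniformly and does not zero out whole rows, which is the limitation of prior methods the abstract points to.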