Visual attention based visual vocabulary

Ma Zhong, Zhao Xin-Bo

2014 International Conference on Orange Technologies, 2014-11-20. DOI: 10.1109/ICOT.2014.6954669

Abstract: We aim to build a visual vocabulary by applying a model of visual attention. Concretely, we first learn a computational visual attention model from real eye-tracking data. We then use this model to find the most salient regions in images and extract features from those regions, yielding a visual vocabulary with greater expressive power. We conducted experiments to verify the effectiveness of the proposed visual-attention-based visual vocabulary. The results show that it improves category-recognition performance, outperforming the traditional vocabulary.
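The pipeline the abstract outlines (score image regions by an attention model, keep the most salient ones, extract features there, and cluster them into visual words) can be sketched as follows. This is a minimal illustration, not the paper's method: the paper learns its attention model from eye-tracking data, whereas the `saliency_map` here is a hypothetical stand-in based on local contrast, and raw pixel patches with k-means stand in for whatever features and clustering the authors actually used.

```python
import numpy as np
from sklearn.cluster import KMeans

def saliency_map(image):
    # Stand-in for the learned attention model: absolute deviation from
    # the global mean as a crude contrast-based saliency score.
    return np.abs(image - image.mean())

def extract_salient_patches(image, patch=8, top_k=20):
    # Tile the image into non-overlapping patches, rank them by total
    # saliency, and return the top_k patches flattened into vectors.
    sal = saliency_map(image)
    h, w = image.shape
    scored = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            scored.append((sal[i:i+patch, j:j+patch].sum(), i, j))
    scored.sort(reverse=True)  # most salient patches first
    feats = [image[i:i+patch, j:j+patch].ravel()
             for _, i, j in scored[:top_k]]
    return np.array(feats)

def build_vocabulary(images, n_words=16, **kw):
    # Pool salient-patch features across all training images and
    # cluster them; the cluster centres are the visual words.
    feats = np.vstack([extract_salient_patches(im, **kw) for im in images])
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(feats)

def bow_histogram(image, km, **kw):
    # Quantize an image's salient patches against the vocabulary and
    # return a normalized bag-of-visual-words histogram.
    words = km.predict(extract_salient_patches(image, **kw))
    hist = np.bincount(words, minlength=km.n_clusters).astype(float)
    return hist / hist.sum()
```

The histograms produced by `bow_histogram` would then feed a standard classifier for the category-recognition experiment; the claimed gain comes from restricting feature extraction to attention-selected regions rather than sampling the whole image.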