Quantitative visual attention prediction on webpage images using multiclass SVM
Sandeep Vidyapu, Vijaya Saradhi Vedula, S. Bhattacharya
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, June 25, 2019. DOI: 10.1145/3317960.3321614
Webpage images (image elements on a webpage) are prominent in drawing user attention. Modeling attention on webpage images aids their synthesis and rendering. This paper presents a visual-feature-based attention prediction model for webpage images. First, fixated images were assigned quantitative visual attention based on users' sequential attention allocation on webpages. Next, the intrinsic visual features of the fixated images were extracted, along with their position and size on the respective webpages. A multiclass support vector machine (multiclass SVM) was trained on the visual features and the associated attention. A majority-voting scheme was then employed to predict the quantitative visual attention of test webpage images. The proposed approach was evaluated through an eye-tracking experiment conducted on 36 real-world webpages with 42 participants. Our model (average accuracy of 91.64% and micro F1-score of 79.1%) outperforms the existing position- and size-constrained regression model (average accuracy of 73.92% and micro F1-score of 34.80%).
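The pipeline described in the abstract (intrinsic visual features plus position and size as inputs, discretized attention levels as labels, a multiclass SVM with majority voting at prediction time) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the feature columns, the number of attention levels, and the one-vs-one decomposition with pairwise voting are choices made here purely to show how a majority-voting multiclass SVM operates.

```python
# Minimal sketch of a majority-voting multiclass SVM for quantitative
# attention prediction on webpage images. Feature layout, attention levels,
# and the one-vs-one voting scheme are illustrative assumptions.
from itertools import combinations

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical training data: each row is one fixated webpage image described
# by intrinsic visual features (e.g., color, intensity, texture descriptors)
# plus its position (x, y) and size (width, height) on the webpage.
n_train, n_features = 200, 9
X_train = rng.random((n_train, n_features))
# Quantitative attention discretized into levels used as class labels (1..4).
y_train = rng.integers(1, 5, size=n_train)

# One-vs-one decomposition: one binary SVM per pair of attention levels.
classes = np.unique(y_train)
pairwise_svms = {}
for a, b in combinations(classes, 2):
    mask = np.isin(y_train, [a, b])
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X_train[mask], y_train[mask])
    pairwise_svms[(a, b)] = clf

def predict_majority(X):
    """Predict the attention level of each test image by majority vote
    over the pairwise binary SVMs."""
    votes = np.zeros((X.shape[0], classes.size), dtype=int)
    class_index = {c: i for i, c in enumerate(classes)}
    for clf in pairwise_svms.values():
        for row, winner in enumerate(clf.predict(X)):
            votes[row, class_index[winner]] += 1
    return classes[votes.argmax(axis=1)]

# Hypothetical test webpage images.
X_test = rng.random((10, n_features))
print(predict_majority(X_test))
```

Note that scikit-learn's SVC already performs this one-vs-one voting internally when given more than two classes; the explicit pairwise loop above only makes the majority vote visible.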