Quantitative visual attention prediction on webpage images using multiclass SVM

Sandeep Vidyapu, Vijaya Saradhi Vedula, S. Bhattacharya
DOI: 10.1145/3317960.3321614
Published in: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications
Publication date: 2019-06-25
Citations: 9

Abstract

Webpage images---image elements on a webpage---are prominent in drawing user attention, and modeling attention on them helps in their synthesis and rendering. This paper presents a visual feature-based attention prediction model for webpage images. First, fixated images were assigned quantitative visual attention based on users' sequential attention allocation on webpages. Subsequently, the fixated images' intrinsic visual features were extracted, along with their position and size on the respective webpages. A multiclass support vector machine (multiclass SVM) was learned from the visual features and associated attention. In tandem, a majority-voting scheme was employed to predict the quantitative visual attention of test webpage images. The proposed approach was evaluated through an eye-tracking experiment conducted on 36 real-world webpages with 42 participants. The model (average accuracy of 91.64% and micro F1-score of 79.1%) outperforms the existing position- and size-constrained regression model (average accuracy of 73.92% and micro F1-score of 34.80%).
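The core pipeline described above (per-image feature vectors, a multiclass SVM, and majority voting over pairwise classifiers) can be sketched in a minimal, self-contained way. The feature names (position, size, contrast), the one-vs-one decomposition, and the sub-gradient training loop are illustrative assumptions for exposition; the abstract does not specify the authors' exact features or solver.

```python
# Sketch: one-vs-one multiclass linear SVM with majority voting, trained on
# hypothetical webpage-image features. Not the authors' implementation.
from itertools import combinations

def train_binary_svm(X, y, epochs=500, lr=0.01, lam=0.01):
    """Sub-gradient descent on the regularized hinge loss; labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # point misclassified or inside the margin
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # only apply weight decay
                w = [wj - lr * lam * wj for wj in w]
    return w, b

class OneVsOneSVM:
    """One binary SVM per class pair; prediction by majority vote."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.models = {}
        for a, c in combinations(self.classes, 2):
            idx = [i for i, yi in enumerate(y) if yi in (a, c)]
            Xp = [X[i] for i in idx]
            yp = [1 if y[i] == a else -1 for i in idx]
            self.models[(a, c)] = train_binary_svm(Xp, yp)
        return self

    def predict_one(self, x):
        votes = {cls: 0 for cls in self.classes}
        for (a, c), (w, b) in self.models.items():
            score = sum(wj * xj for wj, xj in zip(w, x)) + b
            votes[a if score >= 0 else c] += 1
        return max(votes, key=votes.get)  # majority-voted attention class

# Toy data: [x_pos, y_pos, area, contrast] per image; attention level 0/1/2.
X = [[0.10, 0.10, 0.20, 0.90], [0.15, 0.05, 0.25, 0.80],
     [0.50, 0.50, 0.50, 0.50], [0.55, 0.45, 0.45, 0.55],
     [0.90, 0.90, 0.10, 0.10], [0.85, 0.95, 0.05, 0.20]]
y = [2, 2, 1, 1, 0, 0]
model = OneVsOneSVM().fit(X, y)
print([model.predict_one(xi) for xi in X])  # predictions on the training points
```

The one-vs-one decomposition trains k(k-1)/2 small binary problems rather than one large one; majority voting over their pairwise decisions is the standard way to combine them into a multiclass prediction.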
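The reported micro F1-score aggregates true positives, false positives, and false negatives across all attention classes before computing precision and recall, unlike macro-F1, which averages per-class scores. A small stdlib sketch with illustrative labels:

```python
# Micro-averaged F1: pool TP/FP/FN over all classes, then compute
# precision, recall, and their harmonic mean. Labels are illustrative.
def micro_f1(y_true, y_pred):
    classes = set(y_true) | set(y_pred)
    tp = fp = fn = 0
    for c in classes:
        tp += sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp += sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn += sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]
print(round(micro_f1(y_true, y_pred), 4))  # → 0.8333
```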