An Analysis of Human Engagement Behaviour Using Descriptors from Human Feedback, Eye Tracking, and Saliency Modelling

Pallab Kanti Podder, M. Paul, Tanmoy Debnath, M. Murshed
Published in: 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), November 2015
DOI: 10.1109/DICTA.2015.7371227
Citations: 11

Abstract

This paper presents an analysis of human engagement behaviour with video, based on real-life experiments. Such an engagement model could be employed in classroom education, in developing programming skills, in reading, and in similar settings. Two groups of people, independent of one another, watched eighteen video clips separately at different times. For the first group, participants' eye-gaze locations, right and left pupil sizes, and eye-blinking patterns were recorded with a state-of-the-art Tobii eye tracker. The second group, consisting of video experts, identified the most significant attention points in the videos. A well-known bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), was also used to generate salient points for the videos. Taking all of these descriptors into account, the proposed behaviour analysis demonstrates the level of participants' concentration on the videos.
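The abstract combines gaze data with a bottom-up saliency map to gauge engagement. The sketch below illustrates the general idea only: it uses a crude global-contrast map as a stand-in for GBVS (it is not the GBVS algorithm), and a hypothetical agreement score measuring how often gaze fixations land on highly salient pixels. All names and the toy frame are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def saliency_map(frame):
    """Crude contrast map: each pixel's absolute deviation from the
    global mean, normalised to [0, 1]. A simple stand-in for a
    bottom-up model such as GBVS, not the real algorithm."""
    s = np.abs(frame - frame.mean())
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def gaze_agreement(sal, gaze_points, top_frac=0.2):
    """Fraction of (row, col) gaze fixations that fall strictly above
    the saliency value at the (1 - top_frac) quantile, i.e. inside the
    most salient region of the map."""
    thresh = np.quantile(sal, 1.0 - top_frac)
    hits = sum(1 for r, c in gaze_points if sal[r, c] > thresh)
    return hits / len(gaze_points)

# Toy frame: a bright square (a hypothetical attention point) on a dark field.
frame = np.zeros((32, 32))
frame[10:16, 10:16] = 1.0
sal = saliency_map(frame)

# Gaze fixations: three on the bright square, one in the background.
gaze = [(11, 11), (12, 13), (14, 12), (2, 30)]
print(gaze_agreement(sal, gaze))  # 0.75
```

A higher agreement score would suggest that viewers' attention tracked the salient content, which is the kind of cross-descriptor comparison the paper's analysis relies on.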