An Analysis of Human Engagement Behaviour Using Descriptors from Human Feedback, Eye Tracking, and Saliency Modelling
Pallab Kanti Podder, M. Paul, Tanmoy Debnath, M. Murshed
2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), November 2015
DOI: 10.1109/DICTA.2015.7371227
Citations: 11
Abstract
In this paper, an analysis of human engagement behaviour with video is presented, based on real-life experiments. Such an engagement model could be employed in classroom education, in developing programming skills, in reading, and in similar settings. Two groups of people, independent of one another, watched eighteen video clips separately at different times. The first group's eye gaze locations, right and left pupil sizes, and eye blinking patterns were recorded by a state-of-the-art Tobii eye tracker. The second group, consisting of video experts, gave opinions on the most significant attention points in the videos. A well-known bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is also utilized to generate salient points for the videos. Taking all of the above-mentioned descriptors into consideration, the introduced behaviour analysis demonstrates the level of participants' concentration on the videos.
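To illustrate how gaze data and a bottom-up saliency map can be combined into an engagement descriptor, the following is a minimal, illustrative sketch (not the authors' method): it samples a normalised saliency map, such as one produced by GBVS, at recorded fixation locations and averages the values as a rough proxy for how closely a viewer's attention tracked the salient content. The function name, inputs, and toy data are assumptions for demonstration only.

```python
import numpy as np

def gaze_saliency_agreement(saliency_map, gaze_points):
    """Mean normalised saliency sampled at the viewer's gaze locations.

    saliency_map : 2-D array (H, W), e.g. a GBVS-style bottom-up map.
    gaze_points  : iterable of (row, col) fixation coordinates in pixels.

    Returns a score in [0, 1]; higher means gaze fell on more salient regions.
    """
    sal = np.asarray(saliency_map, dtype=float)
    # Normalise the map to [0, 1] so scores are comparable across frames.
    span = sal.max() - sal.min()
    sal = (sal - sal.min()) / (span + 1e-9)

    h, w = sal.shape
    scores = []
    for r, c in gaze_points:
        # Clamp fixations that fall slightly outside the frame.
        r = int(np.clip(r, 0, h - 1))
        c = int(np.clip(c, 0, w - 1))
        scores.append(sal[r, c])
    return float(np.mean(scores)) if scores else 0.0


if __name__ == "__main__":
    # Hypothetical toy frame: a single bright (salient) region the viewer mostly fixates on.
    sal_map = np.zeros((120, 160))
    sal_map[50:70, 80:110] = 1.0
    fixations = [(60, 95), (55, 100), (10, 10)]  # two on-target, one off-target
    print(gaze_saliency_agreement(sal_map, fixations))
```

A per-frame score of this kind could be aggregated over a clip and compared against the expert-annotated attention points to characterise a participant's level of concentration.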