Understanding Mobile Reading via Camera Based Gaze Tracking and Kinematic Touch Modeling
Wei Guo, Jingtao Wang
Proceedings of the 20th ACM International Conference on Multimodal Interaction
Published: 2018-10-02
DOI: 10.1145/3242969.3243011 (https://doi.org/10.1145/3242969.3243011)
Citations: 6
Abstract
Despite the ubiquity and rapid growth of mobile reading activities, researchers and practitioners today either rely on coarse-grained metrics such as click-through rate (CTR) and dwell time, or on expensive equipment such as gaze trackers, to understand users' reading behavior on mobile devices. We present Lepton, an intelligent mobile reading system and a set of dual-channel sensing algorithms that achieve scalable, fine-grained understanding of users' reading behaviors, comprehension, and engagement on unmodified smartphones. Lepton tracks the periodic lateral patterns, i.e., saccades, of users' eye gaze via the front camera, and infers their muscle stiffness during text scrolling from touch events via a Mass-Spring-Damper (MSD) based kinematic model. Through a 25-participant study, we found that both the periodic saccade patterns and the muscle stiffness signals captured by Lepton can serve as expressive features for inferring users' comprehension and engagement in mobile reading. Overall, our new signals lead to significantly higher performance in predicting users' comprehension (correlation: 0.36 vs. 0.29), concentration (0.36 vs. 0.16), confidence (0.5 vs. 0.47), and engagement (0.34 vs. 0.16) than traditional dwell-time-based features via a user-independent model.
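The abstract's touch channel models scroll dynamics with a Mass-Spring-Damper system, whose free response obeys m·x'' + c·x' + k·x = 0; stiffer muscle control shows up as different fitted parameters. As a rough illustration (not the paper's actual algorithm), one can estimate a damping coefficient from the post-release velocity decay of a scroll fling, assuming an overdamped, unit-mass response where velocity decays as v(t) ≈ v0·exp(-c·t). The function name and fitting scheme below are hypothetical:

```python
import numpy as np

def estimate_msd_params(t, x):
    """Estimate initial fling velocity v0 and damping coefficient c from a
    scroll-release trajectory, assuming an overdamped unit-mass MSD response
    v(t) ~ v0 * exp(-c * t). Illustrative sketch only; the paper's actual
    feature extraction from touch events may differ."""
    # Finite-difference velocity from sampled touch positions
    v = np.gradient(x, t)
    # Fit log|v| vs. t with a line: slope = -c, intercept = log(v0)
    mask = np.abs(v) > 1e-9
    slope, intercept = np.polyfit(t[mask], np.log(np.abs(v[mask])), 1)
    c = -slope
    v0 = np.exp(intercept)
    return v0, c
```

Per-scroll parameters like `c` could then be aggregated per user or per reading session as kinematic features, alongside the camera-based saccade features.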