Spatio-Temporal Event Selection in Basic Surveillance Tasks using Eye Tracking and EEG

GazeIn '14 · Publication date: 2014-11-16 · DOI: 10.1145/2666642.2666645
Jutta Hild, F. Putze, David Kaufman, Christian Kühnle, Tanja Schultz, J. Beyerer
{"title":"Spatio-Temporal Event Selection in Basic Surveillance Tasks using Eye Tracking and EEG","authors":"Jutta Hild, F. Putze, David Kaufman, Christian Kühnle, Tanja Schultz, J. Beyerer","doi":"10.1145/2666642.2666645","DOIUrl":null,"url":null,"abstract":"In safety- and security-critical applications like video surveillance it is crucial that human operators detect task-relevant events in the continuous video streams and select them for report or dissemination to other authorities. Usually, the selection operation is performed using a manual input device like a mouse or a joystick. Due to the visually rich and dynamic input, the required high attention, the long working time, and the challenging manual selection of moving objects, it occurs that relevant events are missed. To alleviate this problem we propose adding another event selection process, using eye-brain input. Our approach is based on eye tracking and EEG, providing spatio-temporal event selection without any manual intervention. We report ongoing research, building on prior work where we showed the general feasibility of the approach. In this contribution, we extend our work testing the feasibility of the approach using more advanced and less artificial experimental paradigms simulating frequently occurring, basic types of real surveillance tasks. The paradigms are much closer to a real surveillance task in terms of the used visual stimuli, the more subtle cues for event indication, and the required viewing behavior. As a methodology we perform an experiment (N=10) with non-experts. The results confirm the feasibility of the approach for event selection in the advanced tasks. We achieve spatio-temporal event selection accuracy scores of up to 77% and 60% for different stages of event indication.","PeriodicalId":230150,"journal":{"name":"GazeIn '14","volume":"69 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"GazeIn '14","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2666642.2666645","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

In safety- and security-critical applications like video surveillance, it is crucial that human operators detect task-relevant events in the continuous video streams and select them for report or dissemination to other authorities. Usually, the selection operation is performed with a manual input device such as a mouse or a joystick. Because of the visually rich and dynamic input, the high attention required, the long working time, and the challenge of manually selecting moving objects, relevant events are sometimes missed. To alleviate this problem we propose adding another event selection process that uses eye-brain input. Our approach is based on eye tracking and EEG, providing spatio-temporal event selection without any manual intervention. We report ongoing research, building on prior work in which we showed the general feasibility of the approach. In this contribution, we extend that work by testing the feasibility of the approach with more advanced and less artificial experimental paradigms that simulate frequently occurring, basic types of real surveillance tasks. These paradigms are much closer to a real surveillance task in terms of the visual stimuli used, the more subtle cues indicating events, and the required viewing behavior. As our methodology, we perform an experiment (N=10) with non-experts. The results confirm the feasibility of the approach for event selection in the advanced tasks. We achieve spatio-temporal event selection accuracy scores of up to 77% and 60% for different stages of event indication.
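The abstract describes the fusion only at a high level: the EEG side contributes the temporal component (when a relevant event occurred) and the eye tracker contributes the spatial component (where the operator was looking at that moment). The sketch below illustrates one plausible way such a pairing could be implemented; it is not the authors' pipeline, and all names (GazeSample, EventSelection, select_events, eeg_triggers, max_lag) are hypothetical placeholders for the classifier output and gaze stream the paper presumably uses.

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional


@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # horizontal gaze position in pixels
    y: float  # vertical gaze position in pixels


@dataclass
class EventSelection:
    t: float  # time of the EEG-detected event (temporal component)
    x: float  # gaze position at that time (spatial component)
    y: float


def select_events(gaze: Iterable[GazeSample],
                  eeg_triggers: Iterable[float],
                  max_lag: float = 0.1) -> List[EventSelection]:
    """Pair each EEG-detected event time with the nearest gaze sample.

    `eeg_triggers` stands in for the timestamps emitted by an EEG
    classifier that flags event-related brain activity; `max_lag` is an
    assumed tolerance (in seconds) for the gap between the trigger and
    the closest available gaze sample.
    """
    samples = sorted(gaze, key=lambda s: s.t)
    selections: List[EventSelection] = []
    for t_event in eeg_triggers:
        nearest: Optional[GazeSample] = min(
            samples, key=lambda s: abs(s.t - t_event), default=None)
        if nearest is not None and abs(nearest.t - t_event) <= max_lag:
            # Combine "when" (EEG) with "where" (gaze) into one selection.
            selections.append(
                EventSelection(t=t_event, x=nearest.x, y=nearest.y))
    return selections
```

Under these assumptions, a selection is produced without any manual intervention: the operator merely looks at the scene, and the EEG trigger timestamps are resolved to the gaze coordinates closest in time, yielding the spatio-temporal selections evaluated in the experiment.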