Collaborative eye tracking for image analysis

Brendan David-John, S. Sridharan, Reynold J. Bailey
{"title":"协同眼动追踪图像分析","authors":"Brendan David-John, S. Sridharan, Reynold J. Bailey","doi":"10.1145/2578153.2578215","DOIUrl":null,"url":null,"abstract":"We present a framework for collaborative image analysis where gaze information is shared across all users. A server gathers and broadcasts fixation data from/to all clients and the clients visualize this information. Several visualization options are provided. The system can run in real-time or gaze information can be recorded and shared the next time an image is accessed. Our framework is scalable to large numbers of clients with different eye tracking devices. To evaluate our system we used it within the context of a spot-the-differences game. Subjects were presented with 10 image pairs each containing 5 differences. They were given one minute to detect the differences in each image. Our study was divided into three sessions. In session 1, subjects completed the task individually, in session 2, pairs of subjects completed the task without gaze sharing, and in session 3, pairs of subjects completed the task with gaze sharing. We measured accuracy, time-to-completion and visual coverage over each image to evaluate the performance of subjects in each session. We found that visualizing shared gaze information by graying out previously scrutinized regions of an image significantly increases the dwell time in the areas of the images that are relevant to the task (i.e. the regions where differences actually occurred). Furthermore, accuracy and time-to-completion also improved over collaboration without gaze sharing though the effects were not significant. Our framework is useful for a wide range of image analysis applications which can benefit from a collaborative approach.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"42 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Collaborative eye tracking for image analysis\",\"authors\":\"Brendan David-John, S. Sridharan, Reynold J. Bailey\",\"doi\":\"10.1145/2578153.2578215\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a framework for collaborative image analysis where gaze information is shared across all users. A server gathers and broadcasts fixation data from/to all clients and the clients visualize this information. Several visualization options are provided. The system can run in real-time or gaze information can be recorded and shared the next time an image is accessed. Our framework is scalable to large numbers of clients with different eye tracking devices. To evaluate our system we used it within the context of a spot-the-differences game. Subjects were presented with 10 image pairs each containing 5 differences. They were given one minute to detect the differences in each image. Our study was divided into three sessions. In session 1, subjects completed the task individually, in session 2, pairs of subjects completed the task without gaze sharing, and in session 3, pairs of subjects completed the task with gaze sharing. We measured accuracy, time-to-completion and visual coverage over each image to evaluate the performance of subjects in each session. 
We found that visualizing shared gaze information by graying out previously scrutinized regions of an image significantly increases the dwell time in the areas of the images that are relevant to the task (i.e. the regions where differences actually occurred). Furthermore, accuracy and time-to-completion also improved over collaboration without gaze sharing though the effects were not significant. Our framework is useful for a wide range of image analysis applications which can benefit from a collaborative approach.\",\"PeriodicalId\":142459,\"journal\":{\"name\":\"Proceedings of the Symposium on Eye Tracking Research and Applications\",\"volume\":\"42 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-03-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Symposium on Eye Tracking Research and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2578153.2578215\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Symposium on Eye Tracking Research and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2578153.2578215","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

We present a framework for collaborative image analysis where gaze information is shared across all users. A server gathers fixation data from all clients and broadcasts it back to them, and the clients visualize this information. Several visualization options are provided. The system can run in real time, or gaze information can be recorded and shared the next time an image is accessed. Our framework scales to large numbers of clients with different eye-tracking devices. To evaluate the system we used it within the context of a spot-the-differences game. Subjects were presented with 10 image pairs, each containing 5 differences, and were given one minute to detect the differences in each pair. The study was divided into three sessions: in session 1, subjects completed the task individually; in session 2, pairs of subjects completed the task without gaze sharing; and in session 3, pairs of subjects completed the task with gaze sharing. We measured accuracy, time-to-completion, and visual coverage over each image to evaluate performance in each session. We found that visualizing shared gaze information by graying out previously scrutinized regions of an image significantly increases dwell time in the areas of the image that are relevant to the task (i.e., the regions where differences actually occur). Furthermore, accuracy and time-to-completion also improved over collaboration without gaze sharing, though these effects were not significant. Our framework is useful for a wide range of image analysis applications that can benefit from a collaborative approach.
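
The abstract describes a gather-and-broadcast architecture: a server collects fixation data from every client and re-broadcasts it to all clients, which then render it, and gaze can also be recorded and replayed when an image is opened later. The paper does not give implementation details, so the following is only a minimal in-process sketch of that pattern under stated assumptions; the FixationHub class, its methods, and the Fixation fields are hypothetical names, not the authors' API.

```python
# Minimal sketch (not the authors' implementation) of the broadcast pattern the
# abstract describes: fixations from any client are gathered centrally and
# re-broadcast to every subscribed client; a history allows later replay.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Fixation:
    user_id: str      # which collaborator produced the fixation
    x: float          # normalized image coordinates in [0, 1]
    y: float
    duration_ms: float


class FixationHub:
    """Gathers fixations from any client and broadcasts them to all clients."""

    def __init__(self) -> None:
        self._subscribers: List[Callable[[Fixation], None]] = []
        self._history: List[Fixation] = []   # kept so gaze can be replayed later

    def subscribe(self, on_fixation: Callable[[Fixation], None]) -> None:
        self._subscribers.append(on_fixation)

    def publish(self, fixation: Fixation) -> None:
        self._history.append(fixation)
        for callback in self._subscribers:
            callback(fixation)

    def replay(self, on_fixation: Callable[[Fixation], None]) -> None:
        """Share previously recorded gaze the next time the image is opened."""
        for fixation in self._history:
            on_fixation(fixation)


if __name__ == "__main__":
    hub = FixationHub()
    hub.subscribe(lambda f: print(f"client A sees gaze from {f.user_id} at ({f.x:.2f}, {f.y:.2f})"))
    hub.subscribe(lambda f: print(f"client B sees gaze from {f.user_id} at ({f.x:.2f}, {f.y:.2f})"))
    hub.publish(Fixation(user_id="B", x=0.42, y=0.31, duration_ms=220.0))

    # Later, when the image is reopened, the recorded gaze can be shared again.
    hub.replay(lambda f: print(f"replayed fixation from {f.user_id}"))
```

In a real deployment the hub would sit behind a network transport, since the framework is described as scaling to many clients with heterogeneous eye trackers, but the gather, broadcast, and replay bookkeeping would look much the same.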
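
The visualization that the study found most effective grays out previously scrutinized regions, and performance was measured in part by visual coverage of each image. As an illustration only, the sketch below approximates both ideas with NumPy; the fixation radius, blending strength, and luminance-based graying are assumptions, since the abstract does not specify how the regions are defined or dimmed.

```python
# Hedged sketch of two ideas from the abstract: (1) graying out image regions
# already scrutinized by any collaborator, and (2) visual coverage as the
# fraction of the image that has been scrutinized. Parameter values are guesses.

import numpy as np


def scrutinized_mask(shape, fixations_px, radius_px=40):
    """Boolean mask of pixels covered by any shared fixation (circular footprint)."""
    h, w = shape
    ys, xs = np.ogrid[:h, :w]
    mask = np.zeros((h, w), dtype=bool)
    for fx, fy in fixations_px:                   # fixation centers in pixel coords
        mask |= (xs - fx) ** 2 + (ys - fy) ** 2 <= radius_px ** 2
    return mask


def gray_out(image_rgb, mask, strength=0.6):
    """Blend previously scrutinized pixels toward their gray (mean-channel) value."""
    img = image_rgb.astype(np.float32)
    gray = img.mean(axis=2, keepdims=True).repeat(3, axis=2)
    out = img.copy()
    out[mask] = (1 - strength) * img[mask] + strength * gray[mask]
    return out.astype(np.uint8)


def visual_coverage(mask):
    """Fraction of the image area scrutinized so far."""
    return float(mask.mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)   # placeholder image
    fixations_px = [(100, 120), (320, 240), (330, 250)]
    mask = scrutinized_mask(image.shape[:2], fixations_px)
    dimmed = gray_out(image, mask)
    print(f"visual coverage: {visual_coverage(mask):.1%}")
```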